19 Feb 2025

Mark Deem is the co-author of AI on Trial.

This is included in our Cyber Law Online Service.

As the title suggests, AI on Trial follows the same process as a High Court trial. Why do you think AI needs to be ‘placed on trial’?

AI has the potential to be one of the most powerful technologies ever developed, and it will undoubtedly impact every single aspect of our lives. It remains far from clear, however, whether its development will genuinely benefit society as a whole, or whether its implementation will be skewed towards certain demographics, stakeholders or individuals. This is too important to be left to chance.

Accordingly, the purpose of the book was to provide some critical thinking as to the challenges and opportunities presented by the technology and - through the device of the Court process - reach a view as to the legal, ethical and regulatory framework and critical guard rails we need to implement, all ‘without fear or favour, affection, or ill will’.

What would you say have been the biggest disruptors to AI and the law since the publication of the first edition of AI on Trial?

The first edition of AI on Trial was conceived and published almost six months before the early iterations of generative AI (especially ChatGPT) were introduced to the world. At the time we wrote the first edition, we were particularly concerned about the lack of education concerning AI and whether there was sufficient engagement with the wider public about how the technology worked and how it could be used. This appetite to engage with these discussions was always going to be fundamental to ensuring that trust and transparency were present in the way AI was developed.

It is fair to say that generative AI has certainly captured the imagination of the world and has been particularly impactful in opening up awareness of the technology. Whether in our personal or professional lives, we are now seeing and using generative AI tools to assist with our work. This has the potential to change working patterns.

However, as we publish the second edition of AI on Trial, the need for education about how the technology actually works has never been greater. Without this, we run the very real risk that we abdicate all responsibility to those developing and implementing AI in our lives and, crucially, that we do not appreciate the ways in which it might not be working in our wider interests.

Do you think it’s possible to reliably use AI to assist with legal work in a responsible way?

AI should be thought of as a calculator or tool, rather than an encyclopaedia. Deployed correctly, with appropriate guard rails in place, it absolutely can be used in a responsible way in order to deliver legal services. However, there needs to be an urgent and fundamental understanding of the way in which the technology works – specifically, that it will respond to a question with the statistically most probable result, rather than qualitatively the ‘correct’ answer. The two will not necessarily be the same.

Maintaining a critically-thinking human in the loop – and being honest as to the ways in which AI is being used – will be vital for us to use AI to assist with delivering responsible legal services. We should be thinking of AI as meaning augmented intelligence – with the technology empowering us to provide greater insight.

What is the biggest threat that AI poses to legal practitioners?

Over-reliance on the output of the technology, and using the technology simply to cut corners (whether to achieve cost or time savings or to increase profits), are challenges that will impact the wider business world even if they deliver only short-term benefits. Our focus therefore needs to be on the medium to long term.

For legal practitioners, this means understanding how we need to adapt our business models to ensure that we know how to train, develop and nurture the legal leaders of the future. How do we ensure that the lawyers of tomorrow will be critically-thinking, value-delivering humans in the loop?

In our rush to implement the technology, to keep up with competitors and to deliver the cost and time savings demanded by clients, we must not lose sight of our needs as providers of legal services. This will require not only an understanding of the underlying technology and how it operates, but also an appreciation of how we can use it to augment our services, both from a client-facing and an internal business point of view. We need to be active players, rather than bystanders. Placing AI on trial goes some way in contributing to this journey.
