Ethical concerns over AI use in law

by Stephen Grant

The growing use of AI technology, and the lack of regulation governing its use in business, may lead legal professionals to inadvertently breach their ethical obligations.

Despite the growing prevalence of AI in legal practice, no formal legislation in the UK yet governs its use, raising concerns about the law's ability to keep pace with technological advancement.

The UK government is taking a different approach to AI regulation from the EU: it has championed a sector-based framework, but as yet no specific legislation exists, only guidance.

There are many potential disadvantages to employing bots such as ChatGPT when conducting legal research.

The emergence of ChatGPT has exposed the legal community to the many potential benefits of AI, including its various efficiencies. However, the legal profession tends to follow traditional ways of working, and often exhibits resistance to new ideas.

I anticipate that all lawyers will find some way to benefit from AI in the future, but we as a profession must address its potential misuse and associated ethical concerns.

For example, if you start using AI to assess mortgage applications, the system could learn discrimination from you and from your historical decisions. Areas of social deprivation where properties are frequently damaged, vandalised or undersold might, without any deliberate human prompting, lead the system to assume that premiums should be increased or higher charges applied in those areas, potentially severely affecting the most vulnerable and at-risk in our society.

The AI system itself has no inherent understanding that certain actions may be unfair or discriminatory; its primary objective is simply to optimise for the most favourable outcomes within its capabilities. It has no morals or values, and trying to program AI systems with human emotions presents a distinct challenge.
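To make that mechanism concrete, below is a minimal sketch in Python (using scikit-learn) of how a lending model can pick up proxy discrimination. Everything in it is hypothetical: the data is synthetic and the feature names are invented for illustration. Nobody labels a postcode as "deprived", but because historical defaults cluster there, the model quietly prices it as high risk.

```python
# A minimal sketch of proxy discrimination in a lending model.
# All data is synthetic and all feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Applicant income in £1,000s (a legitimate underwriting signal).
income_k = rng.normal(30, 8, n)

# A binary "postcode A" flag. In this toy history, postcode A is an
# area of social deprivation, so past defaults cluster there for
# reasons unrelated to any individual applicant's conduct.
postcode_a = rng.integers(0, 2, n)

# Historical default labels: driven partly by income, partly by
# area-level effects that happen to correlate with postcode A.
logit = -0.1 * (income_k - 30) + 1.2 * postcode_a - 0.5
default = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased history. Nobody tells the model to discriminate.
X = np.column_stack([income_k, postcode_a])
model = LogisticRegression().fit(X, default)

# Two applicants identical in every respect except postcode.
applicants = np.array([[30.0, 0.0], [30.0, 1.0]])
risk = model.predict_proba(applicants)[:, 1]
print(f"Predicted default risk, other postcode: {risk[0]:.1%}")
print(f"Predicted default risk, postcode A:     {risk[1]:.1%}")
# The model quotes materially higher risk (and so a higher premium)
# purely on postcode, without deliberate human prompting.
```

Two applicants identical in every respect except their postcode receive different quoted risk, which is precisely the kind of outcome that would raise equality and fairness concerns for a regulated firm.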

Language model chatbot ChatGPT creates original text on request, but comes with warnings that it can “produce inaccurate information”.

Data protection will prove to be an issue when it comes to using AI in the legal profession, and hacking incidents are not a matter of “if” but “when” – potentially exposing sensitive information that users have entrusted to the AI network. 

There are some obvious legal ramifications in this area. How do you sue an AI system? And who is really to blame – the users who input the data, or the creators of the AI software being used? Answering these important questions is increasingly complex.

For now, there is no formal legislation surrounding the use of AI, and the law is slow to catch up with technology.

The counter-argument to that is: however slow the law may be, could we not use AI itself to help it catch up more quickly?


What are your experiences with using chatbots? 
I would love to hear your opinion on this. Feel free to contact me (srg@wjm.co.uk) to discuss this topic further.


Stephen Grant is a transactional corporate and commercial lawyer with both academic and industry knowledge of the computing and technology sectors. He has experience advising clients in a variety of sectors, both public and private, ranging in size from listed blue-chip companies to sole traders, on a broad range of corporate matters.

30 November 2023

Wright, Johnston & Mackenzie LLP