The Bar Council has issued new guidance addressing the use of ChatGPT and other GenAI tools for barristers.
Date: 15/02/2024
The Bar Council has issued new guidance addressing the use of ChatGPT and other generative artificial intelligence (AI) large language model (LLM) systems by barristers. The guidance emphasises that while there is nothing inherently improper about employing reliable AI tools to augment legal services, practitioners must have a clear understanding of these tools and use them responsibly.
The guidance highlights key risks associated with LLMs, including anthropomorphism, hallucinations, information disorder and bias in training data.
Barristers are advised to:
verify LLM output and maintain proper procedures for checking generated content, given the potential for hallucinations and bias
refrain from substituting content generated by LLMs for their own professional judgement, quality legal analysis and expertise
exercise vigilance before sharing privileged or confidential information with any LLM system
assess generated content for potential intellectual property violations.
The guidance also recommends staying abreast of the Civil Procedure Rules, which may in future introduce rules or practice directions on the use of LLMs.
Sam Townend KC, Chair of the Bar Council, emphasised the inevitability of AI tools’ growth in the legal sector and urged barristers to understand these systems for controlled and ethical use.
The guidance, developed by the Bar Council’s IT Panel in consultation with its Regulatory Review Panel, aims to assist barristers in adhering to legal and ethical standards when incorporating LLMs into their practices. It concludes by noting that the guidance is subject to review, and that practitioners should remain vigilant and adapt to changes in the legal and regulatory landscape. Importantly, the guidance does not constitute legal advice and does not serve as ‘guidance’ for the purposes of the BSB Handbook 16.4.