LIDW24 – Quinn Emanuel: A Judicious Look at AI

Quinn Emanuel & TrialView: A Judicious Look at AI

Speakers: Sir Geoffrey Vos, John B. Quinn, Mrs Justice Joanna Smith, Elizabeth Wilson, Stephen Dowling

The legal profession stands on the cusp of a technological revolution, driven by the rise of artificial intelligence (AI). This was the central theme of a recent panel discussion featuring prominent figures from both the judiciary and legal practice. The event explored the transformative potential of AI, its implications for the legal profession, and the balance between technological adoption and ethical considerations.

It’s all about Trust

The Master of the Rolls, Sir Geoffrey Vos, opened the discussion by addressing the critical question of confidence in AI, commenting that the adoption of AI in minor disputes, involving small sums, demonstrates a growing comfort with automated systems. However, he stressed that public trust must be maintained, especially in more serious matters, such as criminal sentencing and family cases, where human judgment remains crucial. Sir Geoffrey Vos pointed out that while AI can provide brilliant summaries and assist in disclosure, the legal system must ensure that justice is perceived to be delivered fairly and proportionately.

Rising expectations

Elizabeth Wilson emphasised the rising expectations from sophisticated clients who are increasingly aware of AI’s capabilities, noting that clients might question law firms that do not engage with AI tools, which can efficiently produce high-quality legal summaries and insights. This expectation is further influenced by the judiciary’s use of AI, potentially setting a new standard for legal practice. Wilson argued that law firms must balance the adoption of AI tools with maintaining client trust and ensuring high-quality legal services.

Training and Transition

The panel discussed how AI is reshaping legal education and the skills required for future lawyers. Highlighting the fact that young lawyers will be trained in a world where AI is integral, it was agreed that the nature of their roles, and the competencies they need to develop, will shift. This transition underscores the importance of preparing new lawyers to work effectively alongside AI, leveraging its capabilities while understanding its limitations.

Balancing Ethics and Efficiency

The discussion moved to the ethical implications of AI in legal practice. Whilst AI can save on both time and costs, the profession must also consider moral and ethical implications. Moreover, transparency is key; for example, if the public perceives decision-making as delegated to machines, it could fundamentally undermine trust in the judiciary.

AI and Empathy

Mrs Justice Joanna Smith highlighted the inability of AI to express empathy, a critical component of judicial and legal decision-making. While AI can assist with factual analysis, it was opined that human judges are essential for understanding the nuanced, emotional aspects of cases. This distinction could be deemed vital to ensure fair and compassionate outcomes.

Liability Shifting

The discussion shifted towards individual accountability, and the ongoing responsibility of lawyers to exercise caution with AI tools. Concerns about potential errors from AI-generated content inexorably led to a question on the liability of tech companies providing these tools; for example, if an AI tool generates an erroneous fact or summary, which is relied on during proceedings, does any liability rest with the tech company?

AI: Disputes in Decline?

Sir Geoffrey Vos concluded by addressing the future challenges and opportunities in AI adoption. He questioned whether factual disputes may decline in number, given that we have so much recorded data at our fingertips. John B. Quinn highlighted the ongoing need for human judgement, given the inherently subjective nature of evidence: arguably, there will always be different versions of an argument which fit within the boundaries of the data produced.

Final Thoughts

We are really only at the beginning of a challenging relationship with a fast-evolving technology. Whilst both the judiciary and practitioners are keen to regulate and curtail AI tools, there is a growing acceptance of the efficiencies to be gained, and a recognition that inertia may be the biggest obstacle if we are truly to progress and embrace new ways of navigating disputes.

LIDW24 – Arbitration Alchemy

Arbitration Alchemy: AI, Psychology and Decision-Making Dynamics

Speakers: Dr Ula Cartwright-Finch, Toby Landau KC, David Blayney KC, Myfanwy Wood, Stephen Dowling.

As part of an exciting week at LIDW 2024, TrialView’s director Stephen Dowling chaired a stimulating discussion on the psychological aspects of AI adoption in international arbitration, kindly hosted by Serle Court. He was joined by Toby Landau KC, Dr. Ula Cartwright-Finch, David Blayney KC and Myfanwy (Miffy) Wood, each of whom provided a unique view on which cognitive biases affect legal decision-making and on the potential of AI technology to reduce their negative effects.

Dr. Ula Cartwright-Finch opened the discussion by explaining the difference between unconscious bias and cognitive bias. Unconscious biases are automatic, unintentional judgements which humans make based on their own background and experiences; cognitive biases, by contrast, are systematic errors in thinking which can affect our decision-making and judgements. These often occur when our brains are overwhelmed with enormous amounts of incoming information. Since we do not have an infinite capacity to process all of that information, we rely on mental shortcuts to help us analyse it and make decisions. That reliance on mental shortcuts can lead to flaws in our reasoning processes. By being unaware of such biases in professional environments, decision-makers run the risk of making unfair and unjust decisions.

Dr. Ula Cartwright-Finch explained that, in the context of legal decision-making, overconfidence bias, confirmation bias, anchoring bias, and recency bias are the most commonly found. For example, overconfidence bias occurs when an individual overestimates their own abilities, knowledge and the accuracy of their predictions. Legal professionals may over-rely on their experience and expertise and thus be overconfident in the accuracy of an assessment, for example when predicting case outcomes. During arbitration proceedings, which often span many months, the tribunal is expected to process and retain substantial amounts of information. This is where confirmation, anchoring and recency biases, which all relate to our memory, can heavily affect how tribunal members interpret information and ultimately reach their final decision on a matter. Confirmation bias can lead individuals to favour, interpret, and remember information in a way that confirms their pre-existing beliefs. Anchoring bias causes an individual to rely too heavily on the first pieces of information they process (i.e. the ‘anchor’), which influences how they process all information that follows and can cause errors in subsequent judgements. Recency bias means individuals recall recent events more easily and vividly, giving those events a greater influence over their decisions than events occurring earlier in the past.

Cognitive Biases in Practice

Due to his background as both an advocate and arbitrator, Toby Landau KC explained how the above-mentioned biases manifest themselves in the context of international arbitration proceedings. Arbitrations based on the adversarial model of adjudication, often associated with common law jurisdictions, prove particularly susceptible to cognitive biases. This is based on the assumption that the tribunal is unprepared; and hence, requires an extensive presentation of all of the facts about a case by the parties. In practice, this leads to an ‘information overload’ which the tribunal is expected to digest, analyse, and translate into a fair outcome. Many rely on creating a map of the case with ‘anchor points’ which aid in their comprehension of the information they are met with. In Toby Landau KC’s experience, it is usually at this initial stage of proceedings that the fate of a case may be determined. The ‘anchor points’ set by the tribunal within their case map influence how they perceive and analyse certain evidence and information. When individuals have a preconceived image of a case in mind, they are more likely to accept information which confirms their initial view. This occurs because the anchors set at the beginning of the proceedings shape their perspective of the case going forward. Consequently, they may become less inclined to revise or ‘remap’ their case understanding.

The tribunal’s decision-making may be particularly affected by cognitive biases at the stage of the post-hearing brief. As this often occurs much later than the initial proceedings, the tribunal’s memory of the case and its facts will naturally have faded. At this stage, the tribunal will most likely refer back to the ‘anchor points’ they set initially, highlighting the importance of mental anchoring and adjustment. This increases the risk of their decision-making being affected by confirmation bias and other memory-related cognitive biases.

Another interesting point brought up by the panel was whether the ultimate goal of the legal profession should be to entirely eliminate cognitive biases. Myfanwy (Miffy) Wood highlighted that arbitral proceedings represent a process that both parties willingly enter and consent to. Some parties may view arbitration as a more reliable dispute resolution method than litigating in front of domestic courts. Given the increasing awareness of how cognitive biases influence human decision-making, other parties may choose to use this fact to their advantage, whether during the arbitrator selection process or the main proceedings.

AI To The Rescue?

This leads us to an important question: (How) should AI technology be used to help legal professionals manage information overload? Is there a risk that using AI in this context could exploit existing human limitations, especially when it comes to setting the right ‘anchor points’? Would AI increase the risks related to cognitive biases?

Drawing on his background as both an advocate and founder of the LegalTech analysis and reasoning tool ‘Associo’, David Blayney KC illustrated how the concept of ‘decision hygiene’ should guide the development of AI solutions within this context. The awareness of biases, in combination with structured decision-making, which takes into account diverse perspectives, is integral to reducing the negative effect of cognitive biases. One practical adoption of AI tools could be the production of a visual, rather than written, map which sets out all disputed points in a case. This could aid arbitrators in analysing complex information and may reduce their reliance on mental shortcuts to synthesise the information in question. Opting for visual tools, rather than solely relying on written submissions, can help with reconciling each party’s submissions more effectively. Decision trees may also be an effective aid in this context. By breaking down a complex case into manageable parts, decision trees help visualise different potential outcomes and thereby allow the quantification of risk attributed to different points in an analytical manner. This may be particularly useful relating to overconfidence and confirmation bias.
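The decision-tree technique described above can be made concrete with a short, illustrative sketch. All probabilities, damages figures and the two-stage structure below are hypothetical assumptions, chosen only to show how rolling back a tree quantifies the risk attributed to different points in a case:

```python
# Hypothetical two-stage decision tree for a claim: liability first,
# then quantum if liability succeeds. All figures are illustrative.

def expected_value(branches):
    """Roll back a list of (probability, value) branches."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

# Stage 2: if liability is established, quantum may be high or low.
quantum = expected_value([
    (0.4, 1_000_000),  # 40% chance of full damages
    (0.6, 400_000),    # 60% chance of reduced damages
])

# Stage 1: liability itself is uncertain, and losing carries costs.
claim_value = expected_value([
    (0.7, quantum),    # 70% chance liability succeeds
    (0.3, -150_000),   # 30% chance of losing and paying costs
])

print(f"Expected value of claim: {claim_value:,.0f}")
```

Breaking the analysis into explicit branches in this way forces each probability to be stated and debated separately, which is precisely what counters overconfidence and confirmation bias.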

As AI technology advances, we may find ourselves at a point in the future where, e.g. in relation to costs, parties may be increasingly scrutinised as to the reasons for not utilising technology to aid legal proceedings and decision-making. Ultimately, this illustrates the importance of transparency regarding the (non)use of AI technology, as this will have an influence on public trust in legal proceedings going forward.

Concluding Thoughts

Rounding off this exciting discussion, the panel collectively agreed that the ultimate focus of future AI solutions should be on aiding and supporting, rather than replacing, human decision-making. Nevertheless, there remain significant practical issues, notably relating to how an AI system is trained, which may influence the efficacy and legitimacy of adopting AI technology in its current form within international arbitration proceedings.

LIDW24 – AI in the Afternoon: Lessons in Adoption

AI in the Afternoon: Practical Lessons in Adoption

The “AI in the Afternoon” event, part of London International Disputes Week, brought together experts from Simmons & Simmons, Therme Group, and TrialView to discuss the practicalities of AI adoption in law firms. The discussions focused on AI’s transformative potential, practical strategies for implementation, and the implications of adoption.

1. Current AI Tools for Legal Practice

  • Custom LLMs: Law firms are deploying large language models (LLMs) internally, examining how to use them for legal tasks with greater control and security than publicly available AI tools like ChatGPT. These internal deployments are designed to address the specific needs of legal professionals, ensuring appropriate use and confidentiality.
  • Integrated AI in Existing Software: AI functionalities are being incorporated into widely used tools like Microsoft Office (via Copilot), enhancing efficiency in tasks such as email drafting, generating starting points for documents in Word or presentations in PowerPoint, translations, search and data analysis.
  • Generative AI for Research: Legal knowledge providers are integrating AI to streamline research processes, allowing lawyers to generate comprehensive and grounded responses to prompts and to produce initial drafts of legal documents quickly.
  • Specialised AI Applications: AI is increasingly utilised in specialist areas such as e-Discovery and document management, significantly reducing the time required for reviewing and organising large volumes of documents.

2. Practical Applications of AI in Legal Work

  • Summarisation: AI tools can generate first drafts of document summaries, aiding lawyers in quickly understanding and conveying complex information to clients.
  • Data Extraction: AI can be used to extract key information from long documents, and produce it in useful formats (e.g. chronologies, dramatis personae).
  • Content Evaluation: AI assists in comparing and analysing semantic differences between legal documents, such as witness statements, and identifying inconsistencies or areas of concern.
  • Document Management: Advanced AI applications facilitate efficient management and retrieval of information from large datasets, using natural language queries to provide answers more quickly, and with references cited where techniques like Retrieval Augmented Generation are used.
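Where Retrieval Augmented Generation is used, the pattern can be sketched minimally: retrieve the passages most relevant to a natural-language query, then assemble them into a prompt whose answer can cite its sources. The filenames, document text and the simple bag-of-words scorer below are illustrative stand-ins for a production embedding model:

```python
import math
import re
from collections import Counter

# Illustrative mini-corpus; filenames and contents are invented.
documents = {
    "witness_statement_1.pdf": "The meeting on 4 March was attended by the director.",
    "email_2021_03_05.msg": "Following yesterday's meeting the director agreed terms.",
    "invoice_118.pdf": "Invoice for consultancy services rendered in February.",
}

def vectorise(text):
    """Bag-of-words vector; a stand-in for a semantic embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k documents most similar to the query."""
    qv = vectorise(query)
    ranked = sorted(documents,
                    key=lambda d: cosine(qv, vectorise(documents[d])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Assemble retrieved passages into a grounded, citable prompt."""
    context = "\n".join(f"[{d}] {documents[d]}" for d in retrieve(query))
    return f"Answer using only the sources below, citing them.\n{context}\nQuestion: {query}"

print(build_prompt("Who attended the March meeting?"))
```

Because the model is instructed to answer only from the retrieved passages, each statement in the response can be traced back to a cited source document, which is what makes this approach attractive for legal work.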

3. Security and Confidentiality Concerns

A major focus of the discussion was on the importance of maintaining security and confidentiality when using AI in legal contexts. Custom implementations are essential to ensure that sensitive legal data is handled appropriately, mitigating many of the risks associated with general AI tools.

4. AI Adoption Strategies

Adopting AI in legal practice requires a strategic approach. Law firms must:

  • Start with AI Literacy: Progress and sensible decisions are only possible with a sound understanding of how the technology works, its risks and the available mitigations.
  • Define Clear Objectives: Identify specific goals and challenges that AI can address before implementation.
  • Leverage Existing Knowledge: Apply lessons from past technology adoptions to streamline AI integration.
  • Balance Innovation and Caution: Avoid rushing into AI adoption without thorough planning and understanding of its implications.

5. Implications of AI Adoption

The potential impacts of AI on legal practice are significant:

  • Pricing Models: As AI automates more tasks, clients may come to expect outcome-based pricing models, and law firms may need to find ways to recover technology investment costs. The billable hour was justifiable when it was otherwise hard to find a basis for charging and measuring the work done, but pricing for legal services will now have to evolve.
  • Skill Sets: New expertise will be required, both by upskilling existing lawyers and hiring new talent with relevant skills.
  • Risk Appetite: Firms must balance cost efficiencies with risk management, aligning client expectations with realistic service delivery.
  • Market Differentiation: Larger firms may benefit from scale in investing in AI, while smaller firms might leverage agility for faster adoption. Firms with deep knowledge assets could create barriers to entry for others, solidifying client relationships.
  • Competitive Edge: The knowledge of ‘the law’ is already an assumed baseline. Quality of service, encompassing speed, efficiency, and commercial awareness, will drive competition.

6. Skills and Team Structure for Effective AI Use

Successful AI adoption depends on multidisciplinary teams that combine legal expertise with technological proficiency. Essential skills include:

  • Technical Literacy: Basic understanding of AI tools and their applications in legal work.
  • Data Science Knowledge: Ability to analyse and interpret data generated by AI tools.
  • Project Management: Effective coordination and management of AI projects to ensure they meet legal and business objectives.

Speakers included: Emily Monastiriotis, Jonathan Schuman, Stephen Dowling, Nicholas Cranfield and Eimear McCann

With thanks to Simmons & Simmons, who drafted this post, and who kindly granted us permission to share. The original post can be found here.

AI in International Disputes Practice: Breaking Ground or Old Hat?

Over the past year, generative artificial intelligence (Generative AI) in the form of large language models (LLMs) such as those behind ChatGPT has taken the world by storm – and legal practice is no exception.

Many will recall the headline-making story of a New York lawyer who was sanctioned by a judge for relying upon non-existent case law precedent that he obtained from ChatGPT and did not double-check, as well as the Texas federal court judge who has implemented a standing order that requires all litigants appearing in his courtroom to make a certification concerning the use of Generative AI in their submissions.

Yet, international arbitration practitioners have relied upon tools and arbitration management software powered by other forms of artificial intelligence (AI) for many years. Indeed, in an era marked by Big Data and an increasingly complex dispute resolution ecosystem encompassing broad document disclosure and evidentiary collection, the work of arbitration practitioners would be impossible to manage without AI. 

So, at a moment dominated by sexy headlines about the risks and opportunities presented by Generative AI in legal processes, this blog post takes a step back to examine the “old hat” AI-powered tools and applications that have long supported international arbitration practitioners with document management, document review, document production, and arbitrator due diligence, among other tasks, and concludes by introducing how Generative AI-powered tools may supplement and enhance the lawyer’s toolkit.  Discover more about how AI-powered tools and applications support international arbitration:

Managing and Producing Documents in the Age of Big Data 

Document review and production are often necessary evils in international arbitration practice. These processes are perceived as time-intensive, low-value grunt work. Moreover, they are often a sore point in the relationship between outside counsel and client because the client may be unwilling to pay the usual rates for time spent on such work.

This pressure point creates an opportunity to implement AI-driven document review tools to reduce the time and costs associated with labour-intensive document review and analysis. E-discovery platforms such as Relativity, Luminance, Everlaw, and CS Disco employ machine learning algorithms to categorise, extract, and analyse information from vast quantities of documents.

Each platform has AI-driven functionalities that enable users to swiftly identify pertinent documents through conceptual search (in addition to keyword search), data visualisation, and document clustering, which expedites the overall document review process. 

Conceptual search in the context of e-discovery is a powerful and innovative approach to information retrieval that goes beyond traditional keyword-based searches. Unlike keyword searches, which rely on exact word matches, conceptual search utilises artificial intelligence and natural language processing to understand the underlying concepts and context within documents.  

Conceptual search is, therefore, useful in e-discovery as it enables legal professionals to uncover relevant documents even when specific keywords or phrases may not have been used, thus reducing the risk of missing critical information. 
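The contrast between keyword and conceptual search can be illustrated with a toy sketch. The hand-built concept map below stands in for the natural language processing and embedding models that real e-discovery platforms use; the documents and terms are invented for illustration:

```python
# Toy contrast between keyword search and a concept-aware search.
# The CONCEPTS map is a hypothetical stand-in for NLP/embedding
# models; all documents are illustrative.
import re

CONCEPTS = {
    "termination": {"termination", "dismissal", "redundancy", "firing"},
    "payment": {"payment", "remittance", "settlement", "invoice"},
}

documents = [
    "We must discuss his dismissal before Friday.",
    "The remittance was delayed by the bank.",
    "Minutes of the quarterly board meeting.",
]

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def keyword_search(term):
    """Exact word matches only."""
    return [d for d in documents if term in tokens(d)]

def conceptual_search(term):
    """Match any document sharing a word from the term's concept group."""
    related = CONCEPTS.get(term, {term})
    return [d for d in documents if tokens(d) & related]

print(keyword_search("termination"))     # the exact word never appears
print(conceptual_search("termination"))  # finds the 'dismissal' document
```

The keyword search misses the relevant document entirely, while the concept-aware search surfaces it, which is exactly the risk-reduction benefit described above.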

One of the biggest challenges of document review and production is ensuring that the choices made are consistently reflected across all similar and duplicate iterations of that document. Here, AI-driven functionalities can drive efficiencies and consistency, thereby assuaging lawyers’ concerns over the risks of inconsistent instructions or understandings, or plain old human error. 

Data visualisation of related documents presents document relationships, patterns, and key information graphically and offers a visual narrative of the document corpus. This allows legal teams to quickly grasp the structure of the data, identify important trends, and pinpoint critical documents. Moreover, it aids in developing effective case strategies by revealing patterns and connections that may not be immediately apparent through traditional text-based analysis. 

These e-discovery platforms also cluster related documents by content or themes, enabling the review of groups of documents relevant to an issue or fact in the arbitration. Clustering also helps identify patterns, trends, or commonalities within a document corpus, which can be crucial for building a coherent legal strategy or uncovering hidden insights.  

Another benefit of clustering is ensuring that strategic decisions are uniformly and consistently applied to the overall document set, including decisions on the treatment of privileged, confidential, or non-responsive information. 
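A much-simplified sketch of content-based clustering is shown below, using Jaccard similarity over word sets in place of the proprietary algorithms these platforms employ; the documents and the similarity threshold are illustrative assumptions:

```python
# Greedy single-pass clustering by lexical similarity - a simplified
# stand-in for the content/theme clustering in e-discovery platforms.
import re

def tokens(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def jaccard(a, b):
    """Overlap of two word sets, 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Illustrative corpus: two near-duplicate drafts and one outlier.
documents = [
    "Draft supply agreement between the parties dated June 2020.",
    "Supply agreement between the parties, final dated June 2020.",
    "Travel expenses claim for the Paris hearing.",
]

def cluster(docs, threshold=0.5):
    """Join a cluster whose first member is similar enough,
    otherwise start a new cluster."""
    clusters = []
    for doc in docs:
        for c in clusters:
            if jaccard(tokens(doc), tokens(c[0])) >= threshold:
                c.append(doc)
                break
        else:
            clusters.append([doc])
    return clusters

for group in cluster(documents):
    print(group)
```

The two agreement drafts end up in one cluster while the expenses claim stands alone, so a privilege or responsiveness decision made on one draft can be applied consistently to its near-duplicates.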

These tools enhance the speed, accuracy, and comprehensiveness of document review for production and disclosure, making them efficient at managing large volumes of electronic data in complex international arbitrations. As experienced counsel will know, these tools enable international arbitrations to unfold on faster and more efficient timetables than would be possible if their legal teams were stuck reviewing, categorising, and producing hard copy documents from within storage boxes or paper filing cabinets. 

End-to-end ODR Platforms 

A key feature of end-to-end online dispute resolution (ODR) platforms like New Era ADR is the ability to automate and streamline dispute resolution. AI algorithms facilitate the intake of cases, intelligently categorise them, and allocate resources efficiently.  

Parties can initiate arbitration proceedings seamlessly, guided by AI-driven prompts and tools, simplifying the complex legal processes associated with dispute resolution. This level of automation reduces administrative burdens and ensures that cases progress smoothly, saving both time and resources. These features may also democratise the dispute resolution process, making it more accessible for self-represented parties who may not have in-house legal expertise. 

Parties can track the progress of their cases in real-time, access relevant documents, and receive notifications through user-friendly interfaces. AI technologies underpin these features, ensuring that parties are well-informed about the status of their arbitration proceedings. This transparency fosters trust and confidence in the process, ultimately contributing to more equitable and satisfactory outcomes. 

Platforms like TrialView’s arbitration management software offer an AI-powered litigation workspace, with smart tools for legal teams to navigate a dispute’s full lifecycle. TrialView provides a centralised platform for uploading, managing and interrogating case data, together with smart bundling tools that permit a user to create a bundle in seconds (with automatic pagination, tabbing, and indexing).

Hyperlinking and cross-referencing tools also run in tandem with in-built court compliance checks, and late insertion features offer greater flexibility. Each of these features can save a lawyer dozens of hours and prevent some of the last-minute stress (and overtime paralegal costs) that crop up on the eve of a hearing or trial. 

Meanwhile, TrialView permits the same data to be examined and interrogated using AI-powered intelligent search. A user can ask a question, and AI-powered tools will not only find the answer, but will direct the user to the exact excerpt, paragraph, and document.

Entity search tools allow a user to find connections between key dates, actors, and events, with timeline building offering further insights. This capacity to really get to know the evidence is incredibly potent, removing the need to manually trawl through paper files for specific facts and data.  

Aside from the time and cost savings, these tools allow legal teams to spend more time doing ‘real’ legal work – focusing on strategy, corroboration, and persuasion, as well as considering counterarguments and counter-approaches.

Another example of AI in action is TrialView’s witness statement preparation and deposition creation tool, which allows users to record the interview, generate an automatic transcript, and convert it to a hyperlinked witness statement.

Finally, on the presentation side of things, TrialView offers smart evidence presentation tools that allow all parties to follow in real-time, with annotation and highlighting tools available to mark up documents and the hearing/trial transcript as the evidence unfolds.

Parties may also follow remotely using an integrated video platform, and team members in different locations can use built-in AI tools to quickly identify material that may corroborate or undermine the propositions being advanced in the hearing or trial room itself.

AI-powered Legal Research Platforms 

AI-powered legal research platforms have become indispensable tools in international arbitration. They provide arbitration lawyers with the technology to search for relevant precedents, jurisprudence, and legal sources across multiple jurisdictions. 

Platforms like Kluwer Arbitration and Jus Mundi harness machine learning to compile and enrich their extensive databases of international arbitration cases, treaties, conventions, and related legal documents. AI offers great benefits to researchers as it underlies the technology that prepares case citations and helps create cross-links and references between various content sets.

Casetext also employs an AI-driven approach, leveraging natural language processing to extract valuable insights from legal texts. Its platform also offers the capability to analyse and summarise legal research results and to draft legal memoranda.

LexisNexis and Westlaw, established names in the legal research landscape, also integrate AI into their platforms to enhance research capabilities by providing predictive analytics to suggest relevant cases, statutes, and secondary sources based on user queries. 

Interestingly, with the rise of Generative AI, many of these legal research platforms have unveiled new Generative AI-powered LLM chatbots to enable the legal researcher to engage in a question-and-answer exchange to facilitate the legal research journey. Such tools supercharge the legal researcher’s user journey and experience. They are especially valuable both to newer legal researchers (such as students) and to those who aim to understand a new area of law very quickly.

However, one must not forget that the data and data enrichment that legal researchers encounter have benefitted from AI-driven tools for many years already. 

AI-enabled Machine Translations 

International arbitration is characterised by its inherently cross-border nature. Parties frequently come from different cultural and linguistic backgrounds, and evidence and testimony may be in multiple languages.  

Despite this added complexity, the usual goal of effective communication and understanding of legal content is paramount to achieving equitable dispute resolution. In this context, AI-powered machine translation has emerged as a game-changing technology, offering advanced linguistic capabilities that transcend traditional language barriers. 

DeepL, renowned for its neural machine translation technology, and Google Translate, a widely accessible and versatile translation service, represent two exemplary platforms that leverage AI and deep learning techniques to deliver precise and context-aware translations of legal documents and communications.

DeepL, Google Translate, and similar AI-driven translation platforms are indispensable assets in international arbitration. They ensure that all participants can effectively engage in the process regardless of language differences, promoting fairness and impartiality in dispute resolution. 

Again, such AI-powered tools are not new to international arbitration practice but have become commonplace in light of the frequency with which parties and their counsel must operate across languages and linguistic barriers. 

Conflict Management and Arbitrator Diligence 

International arbitration often involves complex disputes with multinational parties and necessitates a rigorous approach to arbitrator selection and conflict management. Various services have emerged to offer arbitrator profile and conflict-checking tools, including Arbitrator Intelligence, Kluwer Arbitration’s Profile Navigator & Relationship Indicator, and Global Arbitration Review’s Arbitrator Research Tool (ART). 

Each tool uses a combination of AI, data analytics, and self-reported information to provide comprehensive insights into arbitrators’ performance and track records. This data helps arbitration practitioners identify suitable arbitrator candidates based on empirical data rather than subjective assessments.

Further, engaging in this process may add diversity to the prospective arbitrator pool by bringing to the practitioner’s attention additional candidates who meet the case criteria but may otherwise not be known to the selecting counsel. Overall, these tools provide data-driven inputs that promote transparency and objectivity in the arbitrator selection process.

On the other hand, these tools may also introduce subjective insights. In some instances, the data collected may include candid feedback from counsel who have appeared before those arbitrators in particular cases. While these insights would be of a different nature from objective data-driven insights, they may prove equally useful in helping practitioners identify arbitrators who meet their needs and who may otherwise have been unknown to them. 

AI-driven Data Analytics for Third-Party Disputes Funding 

Predictive data analytics has become a powerful tool in litigation finance and third-party funding, transforming legal professionals’ strategic decisions. Notable platforms in this space include Lex Machina and Arbilex. 

By aggregating data from documents filed on court dockets and leveraging AI alongside human legal expert review to structure that data, Lex Machina provides insights such as the quantum of damages, likely case resolutions, opposing counsel’s litigation history, and the timing of proceedings, enabling practitioners to make predictions about various aspects of their cases. 

Similarly, Arbilex employs machine learning algorithms to analyse historical case data, legal precedents, and financial metrics. This enables third-party funders to assess the potential risks and rewards of funding a particular case.  

Burford Capital, a third-party funder, enhances its legal finance modelling with AI, while acknowledging AI’s limitations given that about 90% of commercial disputes are resolved through confidential settlements. Despite this, Burford considers that integrating AI can improve the accuracy and speed of case assessment by analysing factors like profitability and outcome likelihoods. The effectiveness of AI models nonetheless depends on the quality of available data, highlighting the difficulty of relying solely on AI tools in legal finance. 

These and similar solutions enhance decision-making within international arbitration, serving as a valuable resource for legal teams dealing with cross-border disputes. These tools analyse historical case data to provide insights into settlement probabilities and potential third-party funding opportunities, enabling arbitration practitioners to make informed decisions and negotiate settlements more effectively. 

Burford Capital also uses AI to originate new business and identify potential cases by enhancing the process of case identification for investment. They leverage AI to combine public data with insights from past successful investments, employing heuristics and prompting techniques. This approach helps Burford identify lawyers and cases that meet their investment criteria and discover instances where businesses have suffered harm but are unaware of their strong claims. Specifically, Burford has initiated projects to scrape the web for lawyers with specific profiles related to successful case types, thereby streamlining the process of finding new investment opportunities and assisting businesses in recognising valuable claims. 

Is There Still Room to Break Ground? 

Notwithstanding the number of AI-driven tools and providers that dispute resolution practitioners are already familiar with and frequently use, there is still ample opportunity, within these same arenas, for innovation driven by Generative AI-powered technology.  

Eliza, a co-author of this post, is also the Co-Founder of Lawdify, a solution that creates intelligent systems (AI agents) to run specialised, laborious, and high-stakes tasks for legal professionals. She believes that the new generation of AI legal technology platforms like Lawdify will leverage LLMs as a “superbrain” to process semantics and to contextualise and connect relevant concepts within legal documents, adding a layer of true intelligence to the voluminous documentary records that disputes lawyers must manage, navigate, and learn from. 

These techniques will enable disputes lawyers to generate work products like chronologies, dramatis personae, lists of issues, and lists of relevant facts in seconds, cross-referencing each to the underlying evidentiary documents. 

Lawyers will be able to promptly retrieve a document through conceptual and semantic search powered by Natural Language Processing, without needing to anticipate specific keywords. They could also “pivot” the underlying evidentiary record (much like a pivot table in Excel) according to their needs, based on a variety of parameters: for example, displaying a chronologically sorted list of documents supporting a particular fact that bolsters the claimant’s arguments on a specific legal issue. LLMs are proficient at exactly this kind of task. 
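To make the “pivot” idea concrete, here is a minimal sketch in Python. The document records, field names (`issue`, `supports`, `fact`), and IDs are invented for illustration, not any vendor’s actual data model; in practice the tags would be generated by an upstream LLM tagging pipeline rather than written by hand.

```python
from datetime import date

# Toy evidentiary record; each document carries tags produced upstream.
documents = [
    {"id": "C-001", "date": date(2019, 3, 1), "issue": "breach",
     "supports": "claimant", "fact": "late delivery"},
    {"id": "R-014", "date": date(2018, 11, 5), "issue": "breach",
     "supports": "respondent", "fact": "force majeure notice"},
    {"id": "C-007", "date": date(2019, 1, 20), "issue": "breach",
     "supports": "claimant", "fact": "late delivery"},
]

def pivot(docs, *, issue=None, supports=None, fact=None):
    """Filter the record on any combination of parameters,
    then sort the hits chronologically."""
    hits = [d for d in docs
            if (issue is None or d["issue"] == issue)
            and (supports is None or d["supports"] == supports)
            and (fact is None or d["fact"] == fact)]
    return sorted(hits, key=lambda d: d["date"])

# Documents supporting the claimant on the breach issue, oldest first.
result = pivot(documents, issue="breach", supports="claimant")
print([d["id"] for d in result])  # → ['C-007', 'C-001']
```

The same record can be re-sliced along any other axis (by party, by fact, by issue) without re-reviewing the documents, which is the essence of the pivot-table analogy.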

The next frontier of AI-powered legal technology could create AI agents that understand the objective of a task and are capable of autonomously running it end-to-end: tagging responsive documents for privilege in a production exercise, creating a privilege log, or retrieving relevant and material documents in response to production requests. 

With respect to reliability and accuracy, AI-first legal technology companies like Lawdify will adopt techniques borrowed from data science to score highly on answer faithfulness (answers grounded in real, not fabricated, facts in the corpus) and answer relevance. 

They will also set up guardrails: providing the sources of underlying documents; creating a record of the reasoning behind each action taken by an AI agent and of the considerations leading to it (such as generating a list of documents that were not responsive to a specific request); implementing evaluation stacks to benchmark answers against those provided by human lawyers; and running a rigorous user feedback loop to collect and monitor user comments and actions. 
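A toy sketch of what one such guardrail might look like: each agent answer is stored with its cited sources and a timestamped record, and flagged if it cites material absent from the corpus. The corpus contents, function name, and record fields are all illustrative assumptions; production systems score whether each claim is semantically supported by its sources, not merely whether the citations exist.

```python
import datetime

# Illustrative corpus; in practice this is the indexed evidentiary record.
CORPUS = {
    "DOC-1": "The contract was signed on 1 March 2019.",
    "DOC-2": "Delivery was due within 30 days of signature.",
}

def answer_with_guardrails(answer, cited_ids):
    """Attach sources and a reasoning record to an agent answer, flagging
    citations that do not exist in the corpus (a crude stand-in for a
    real faithfulness check)."""
    missing = [i for i in cited_ids if i not in CORPUS]
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "answer": answer,
        "sources": {i: CORPUS[i] for i in cited_ids if i in CORPUS},
        "unsupported_citations": missing,
        "faithful": not missing,
    }

rec = answer_with_guardrails("Delivery was due by 31 March 2019.",
                             ["DOC-1", "DOC-2"])
print(rec["faithful"])  # → True
```

The record itself becomes the audit trail: a reviewer can see what the agent answered, which documents it relied on, and when.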

Concluding Thoughts 

AI-powered tools are far from new in international arbitration practice. Indeed, many of the practices and procedural approaches that have become commonplace over the years have been driven by the efficiencies and opportunities that AI-powered tools enable. 

Yet, Generative AI remains poised to profoundly transform how international arbitration work is performed and offered. Its potential is vast, offering enhanced capabilities in legal research, document review automation, and even altering traditional billing models.  

The promise of AI-driven end-to-end online dispute resolution platforms could revolutionise how arbitration is approached, particularly for low-value commercial disputes. It has the potential to rebalance how arbitration is practised, opening doors to democratising access to the arbitral process for users who may be self-represented and without in-house legal expertise. 

In all events, just like the “old hat” AI-powered tools that have facilitated international arbitration as it exists today, lawyers must be among the early adopters of Generative AI-powered tools, finding ways to test and hone new skill sets to enhance their practices and offer greater value to their clients. 

Find out more about how AI can make a difference to you 

There is no denying that AI is becoming increasingly essential in the world of disputes. At TrialView, our award-winning AI-powered platform and arbitration management software is trusted by law firms across the globe. 

With document review and production tools, TrialView’s innovative software enables you to drive efficiencies in every aspect of your case. From eBundling and case preparation through to hearing and evidence-presentation features, TrialView empowers legal professionals to focus on outcomes. 

If you would like to learn more about how AI and arbitration management software can support you, reach out to info@trialview.com for an overview of our AI capabilities. Alternatively, why not book a tailored demo to see TrialView in action? 

*The opinions and insights presented in this post solely represent the authors’ views. They are not endorsed by or reflective of the policies or positions of their affiliated firms or organisations. 

Elizabeth Chan (陳曉彤)
Elizabeth is a Registered Foreign Lawyer at Tanner De Witt in Hong Kong, specialising in international arbitration, litigation and restructuring and insolvency. Elizabeth has worked in arbitration at Allen & Overy (Hong Kong), Three Crowns (London), and Herbert Smith Freehills (New York and Hong Kong). She is ranked as a Future Leader of Who’s Who Legal Arbitration (2022-2024), the Legal 500 Arbitration Private Practice Powerlist: UK (2022) and Legal 500’s inaugural Arbitration Powerlist – Hong Kong (2023).

Kiran Nasir Gore
Kiran is an Arbitrator, dispute resolution consultant, and counsel in Washington, DC, with fifteen years’ experience in public and private international law, international development, foreign investment strategies, international dispute resolution, and legal investigation and compliance efforts. Her online newsletter, Law & Global (Dis)Order, analyses issues at the intersection of international law, dispute resolution, business and technology; it has several thousand subscribers in over 30 different countries.

Eliza Jiang
CEO and Founder of Lawdify, a venture-backed AI-first SaaS solution creating AI agents to run specialised, laborious, and high-stakes tasks for legal professionals. She is an advocate of AI and technology transformation in knowledge professions and of harnessing the power of AI to systematise professional services. She has over 10 years of experience as an international arbitration lawyer, with experience in Toronto, New York, Hong Kong, and Singapore. She also acts as an independent arbitration practitioner.

 

The Devil Lies In The Detail: Regulating The Use Of Gen AI

AI litigation continues to develop and evolve, and the use of Generative AI in legal proceedings is becoming increasingly common. But what are the latest developments, and how can Generative AI in litigation be regulated? Elizabeth Chan, Kiran Nasir Gore, and Eliza Jiang investigate in this post. 

Over the past year, the power of generative artificial intelligence (Generative AI) has taken the world by storm. Today, nearly every digital tool and platform is advertising a new Generative AI feature to help users to better organise their tasks, streamline and process information, conduct research and learn more about various topics, and write more efficiently and effectively.  

Legal processes are not immune from these developments, and a global debate has emerged on whether and what role Generative AI-powered tools should play in the legal work performed by dispute resolution specialists.  

As this blog post demonstrates, the devil lies in the details. While Generative AI-powered tools can make litigation and arbitration teams more efficient and effective, regulations, such as disclosure or certification requirements, can help (or hinder!) the ethical, fair, and responsible use of these tools and a level playing field for all parties participating in these proceedings. This post explores these latest developments. 

The Need to Regulate the Use of Generative AI-Powered Tools in Dispute Resolution Practice 

Using Generative AI-powered tools in the work of dispute resolution specialists presents many challenges and risks.  

These tools can be opaque, and it may be challenging for users to understand precisely what they do, how they work, and what happens to the information and data users input. These circumstances create the potential for severe consequences for misinformed or underinformed users, including professional conduct violations or breaches of confidentiality and/or attorney-client privilege.  

Even more, where disputes, such as international arbitration cases, involve cross-border elements, the laws and regulations of multiple jurisdictions may apply. Indeed, in the multi-jurisdictional context, it may be even more urgent to either harmonise or regulate standards of use for Generative AI-powered tools to help ensure procedural fairness. 

The BCLP 2023 survey of 221 arbitration professionals revealed that a significant majority (63%) support regulating disputing parties’ use of Generative AI-powered tools in international arbitration proceedings. This consensus suggests that there are risks associated with non-regulation.  

This is underscored when one considers the importance of the documents that international arbitration practitioners may work on, including legal submissions, expert reports, and arbitral awards – each of which must be precise, accurate, and coherent. However, while baseline regulation itself is an important first step to engaging with this technology, it is equally vital that the developed regulatory framework is adaptable and forward-looking. 

Guidelines and Principles for Legal Practitioners 

The Silicon Valley Arbitration and Mediation Center’s (SVAMC) Draft Guidelines on the Use of AI in Arbitration (Draft Guidelines) stand out as the only cross-institutional guidelines (to date) tailored explicitly for international arbitration contexts.  

The SVAMC Draft Guidelines were prepared with contributions from a committee (including Elizabeth, a co-author of this blog post) and propose a nuanced approach to the disclosure of when AI has assisted in preparing legal work product.  

It is important to note that the SVAMC Draft Guidelines define “AI” broadly. While their immediate focus is on the Generative AI-powered tools that are also the focus of this blog post, the Draft Guidelines refer to “AI” generally and aim to go even further in hopes of remaining evergreen and thereby capturing the regulation of AI-based technologies and tools that may not yet be developed. 

The SVAMC Draft Guidelines recognise that the need for disclosure may vary, suggesting that, in some instances, the AI technology being used may be straightforward and uncontroversial (e.g., technology-aided document review (TAR)), thus not requiring explicit disclosure.  

However, the Draft Guidelines also allow for the possibility that arbitral tribunals, parties, or administering institutions might demand disclosure of the use of Generative AI-powered tools, especially when such use could significantly influence the integrity of the arbitration proceedings or the evidence presented within it. 

The AAA-ICDR Principles for AI in ADR (AAA-ICDR Principles) and the MIT Task Force on the Responsible Use of AI in Law (MIT Principles) provide additional sets of guidelines and principles on the use of AI in legal practice. The AAA-ICDR Principles emphasise that AI should be used in alternative dispute resolution (ADR) cases, including arbitrations, in a manner that upholds the profession’s integrity, competence, and confidentiality. They do not specifically address disclosure requirements.  

Meanwhile, the MIT Principles, which are applicable more broadly within legal contexts, highlight the importance of ethical standards, including confidentiality, fiduciary care, and the necessity for client notice and consent, indirectly suggesting a framework where disclosure of AI use might be required under certain conditions to maintain transparency and trust.  

These various guidelines and principles collectively underscore the evolving landscape of AI in legal practice and emphasise the need for careful consideration of when and how AI-powered assistance should be disclosed. These guidelines and principles also share the core tenet that the integrity of legal work and fairness in the dispute resolution process must be upheld. 

Regulation, Disclosure, or Self-Policing? 

Different jurisdictions are approaching the need to disclose the use of AI assistance in preparing legal work products differently, and a spectrum of regulatory philosophies and practical considerations is emerging.  

For example, in the United States, a Texas federal judge has added a judge-specific requirement that attorneys not only certify that any court filings drafted with the assistance of a Generative AI-powered tool were verified for accuracy by a human, but also take full responsibility for any sanction or discipline that may result from improper submissions to the court. 

This approach demonstrates a policy-based choice. The objective is not to prevent the use of Generative AI-powered tools in litigation practice but rather to allocate risk, maintain the integrity of the materials put before the court, and ensure that attorneys remain ultimately responsible for those materials.  

Interestingly, the template certification provided by the judge does not necessarily require an attorney to disclose whether they have used Generative AI-powered tools to prepare their legal submissions, only that, in case such tools were used, a human attorney has verified the submission and the attorney takes full responsibility for its contents.  

As such, the certification requirement is not very different from the attorney of record’s existing obligation to diligently ensure that all submissions presented to the court are of appropriate quality. 

Meanwhile, the Court of King’s Bench in Manitoba, Canada, has adopted a more prescriptive disclosure practice, mandating that legal submissions presented to the court disclose whether and how AI was used in their preparation. However, it does not mandate disclosure of the use of AI to generate work products often used to analyse cases, such as chronologies, lists of issues, and dramatis personae, upon which legal submissions may rely. 

On the other hand, New Zealand and Dubai represent contrasting models of disclosure obligations. New Zealand’s guidelines for lawyers do not necessitate upfront disclosure of AI use in legal work. Rather, they focus on the lawyer’s responsibility to ensure accuracy and ethical compliance, and disclosure of specific use of AI-powered tools is required only upon direct inquiry by the court. This approach prioritises the self-regulation of legal practitioners while maintaining flexibility in how AI-powered tools are integrated into legal practice. 

In contrast, the Dubai International Financial Centre (DIFC) Courts recommend early disclosure of AI-generated content to both the court and opposing parties. Such proactive disclosure is viewed, in that context, as essential for effective case management and upholding the integrity of the judicial process. 

On the other side of the bench, some jurisdictions have unveiled guidelines for using Generative AI-powered tools by courts and tribunals. New Zealand and the UK now provide frameworks for judges and judicial officers. These guidelines emphasise the importance of understanding Generative AI’s capabilities and limitations, upholding confidentiality, and verifying the accuracy of AI-generated information. In principle, neither jurisdiction’s guidelines require judges to disclose the use of AI in preparatory work for a judgment. 

Potential Regulation of AI Use in Arbitrator Identification and Selection 

The drafting of legal submissions and arbitral awards are not the only areas where AI-powered tools may be integrated into an international disputes practice. AI-powered tools may also play a role in identifying and shortlisting arbitrators. This application carries potential implications for diversity and fairness in arbitrator selection.  

Typically, neither parties nor institutions must disclose their reasons for appointing particular arbitrators or the process they undertook to shortlist candidates. However, disclosure may be relevant where AI-powered tools are used to identify and potentially select arbitrators, given the biases and risks inherent in AI training tools and datasets. 

Indeed, there are relevant parallels between the arbitrator selection process and general recruitment processes, as both involve evaluating and selecting candidates for specific roles. Legislative steps, such as New York City Local Law 144 (New York Law 144), regulate the use of AI-powered tools in recruitment, highlighting the importance of transparency and accountability in AI-assisted candidate selection processes.  

New York Law 144 requires Automated Employment Decision Tools (AEDT) to undergo annual bias audits to ensure fairness and transparency. Similarly, the European Union’s concerns, as expressed by the Permanent Representatives Committee, underscore the need for careful regulation of AI in selection processes to protect individuals’ career prospects.  
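The bias audits contemplated by New York Law 144 are commonly described in terms of impact ratios: each group’s selection rate divided by the highest group’s selection rate, with ratios well below 1.0 flagging potential disparate impact. A simplified sketch of that arithmetic follows; the group names and figures are invented for illustration.

```python
def impact_ratios(selections):
    """selections maps group -> (selected, total). A group's impact ratio
    is its selection rate divided by the highest selection rate across
    all groups, so the most-selected group always scores 1.0."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())
    return {g: round(r / top, 2) for g, r in rates.items()}

# Hypothetical audit: 40/100 of group A shortlisted vs 20/100 of group B.
ratios = impact_ratios({"group_a": (40, 100), "group_b": (20, 100)})
print(ratios)  # → {'group_a': 1.0, 'group_b': 0.5}
```

An analogous calculation over an AI-assisted arbitrator shortlisting tool would reveal whether certain categories of candidate are being systematically surfaced less often.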

While audits of AI databases and algorithms can help identify and rectify any inadvertent biases, completely eliminating diversity-related biases remains a significant challenge. For instance, Google’s recent efforts to subvert racial and gender stereotypes in its Gemini bot encountered backlash, illustrating the complexity of addressing biases without introducing new issues. 

Conclusion 

Integrating Generative AI-powered tools into the work of litigation and arbitration teams has prompted new conversations on regulatory measures, including disclosure and certification requirements, to ensure their ethical and fair application.  

The SVAMC Draft Guidelines, the AAA-ICDR Principles, and the MIT Principles each present exemplar frameworks for the responsible use of Generative AI-powered tools, emphasising transparency, accountability, and ethical standards. Moreover, various jurisdictions have adopted different approaches to disclosure or certification requirements, thereby demonstrating a range of policy-driven priorities.  

However, collectively, these recent developments signal a critical juncture in the legal profession’s engagement with Generative AI, stressing the need for adaptable, forward-looking regulatory frameworks that uphold the integrity and fairness of legal processes. 

Looking for further information about AI in litigation? 

For additional information about the use of AI in the legal profession, including AI litigation and Generative AI in dispute resolution, contact TrialView to learn about our award-winning AI-powered platform. 

Trusted by law firms around the world, our Open AI offering enables you to ask questions, build timelines, detect patterns, and make connections at speed, so you can compare statements, depositions, and case documents, as well as uncover inconsistencies. 

Plus, with eBundling, case preparation, and hearing services, TrialView empowers you to work with speed and efficiency so you can work smarter, not harder. If you would like to learn more about how litigation AI can support you, reach out to info@trialview.com or book a tailored demo to see TrialView in action. 


 

Next Gen Disclosure: A Look at Recent Approaches in the CAT and the Role of AI

In “Better Together: Creative Case Management by the CAT”, Edward Coulson, Lindsay Johnson and India Fahy explored a number of creative approaches to case management that have been adopted by the UK’s specialist Competition Appeal Tribunal (‘CAT’) in seeking to grapple with the explosion of the number of damages actions before it.

In this guest post for TrialView, the team at BCLP look at how case management by the CAT is fast-developing and far from settled in its approach.

INTRODUCTION

As the CAT seeks to grapple with the challenge of resolving multiple cases relating to the same infringement (almost invariably including a flurry of individual claims as well as collective actions), it has become increasingly apparent that, when it comes to disclosure, no one size fits all, and the CAT too is feeling its way to the correct approach to managing disclosure in complex damages actions.

In this article, we explore a number of different approaches to disclosure recently adopted by the CAT and how advancements in technology may be leveraged by the CAT to overcome disclosure challenges.

APPROACHES TO DISCLOSURE

Competition damages actions often involve very complex disclosure issues, as a result of factors such as the number of parties and issues involved, the historic nature of the conduct concerned, and the availability of documentary evidence and data. Such cases typically involve extremely voluminous disclosure and concerns have been expressed by the CAT and by parties on all sides about the disproportionate cost of providing and reviewing disclosure.

It is fair to say that the CAT has seen a need for close case management in such cases and has taken a ‘hands-on’ approach to managing disclosure. In the CAT’s Disclosure Ruling in the ‘First Wave’ of Trucks, it was explained that the CAT will tailor disclosure orders to what is proportionate in each individual case and that disclosure in a damages claim such as Trucks requires close case management by the CAT. The Disclosure Ruling outlined certain broad principles applied by the CAT, including, for example, that disclosure will only be ordered if it is limited to what is reasonably necessary and proportionate bearing in mind a number of aspects of the particular action. It is evident from the evolution of different approaches to disclosure that the courts are willing to explore all options for resolving cases efficiently and reducing the enormous cost of disclosure.

The range of options being explored in competition cases can perhaps be best observed by looking at the differences between the approach of Smith J, President of the CAT, in Genius Sports in the High Court, which the President has described as a regime of “over-inclusive” disclosure, and the approach currently being trialled in the CAT, which the President has described as “a non-disclosure-based process”.

Genius Sports

In Genius Sports, Smith J ordered a bespoke regime involving an “over-inclusive” approach to disclosure of documents between the parties, with only “unequivocally irrelevant and privileged documents” to be excluded from the disclosure exercise, leaving the receiving party to review the documents itself. Smith J made this order on the basis that there was a risk that under a standard approach relevant documents would be omitted, and on the basis that “massive over-disclosure” no longer gives rise to the “real risk that the really important documents will get overlooked… rather, the electronic filtering of documents gives rise to the real risk that really important documents are not looked at by any human agent”.

Smith J’s explanation of the process can be summarised as follows:

  • Each party would identify to the other precisely what documents would be subject to an electronic search and would swear an affidavit identifying custodians, repositories and collections of documents to be searched, together with any date ranges that would be applied to exclude or include material;
  • In defining the universe of documents to be searched, each Producing Party should err on the side of over-inclusion;
  • The object of the electronic review is to filter out documents that are irrelevant on the Peruvian Guano test, not to identify relevant documents;
  • Each Receiving Party should be fully informed as to the nature of the electronic review that has been conducted;
  • At the conclusion of each party’s review, there will be a corpus of documents that exclude the unequivocally irrelevant and a further review to filter the documents further on grounds of relevance should not take place;
  • In order to identify privileged material, to be excluded from disclosure, Smith J indicated that whilst an “eyeball” review would be best, it is unlikely to be feasible; accordingly, what is likely to be appropriate is an electronic search targeted specifically at the identification of privileged material, which is then reviewed by a human agent;
  • There should be no filtering on grounds of confidentiality – confidential material must be produced to the Receiving Party. Smith J indicated that confidential material would be protected in a number of ways, beyond CPR 31.22 (the rule applicable in High Court proceedings that prohibits the party receiving disclosure from making collateral use of it), including, for example, through the use of auditable access to disclosure platforms, with parties obliged to keep a record of who accesses what document and when.

 

In a recent speech, Smith J reflected on this approach and remarked that:

“The point is that whilst everybody trusts an “eyeball” review by a professional and regulated team, no-one trusts the other side’s electronic search processes. And for good reason: the algorithmic “black boxes” that exist now (AI; concept grouping; etc. – keywords are so passé!) are robust in different ways in terms of the generosity or otherwise of their relevant document production, and the party receiving disclosure is entitled to understand how well the process has worked.”

In the particular case before him, Smith J concluded that the solution, which “is not a one size solution – and some would say it is not a solution at all”, was the receiving party being permitted to run the process themselves and carry out whatever searches they wish, understanding that excessive costs would not be recoverable.

“Non-disclosure-based process”

Smith J has explained the impetus for what he terms the ‘non-disclosure-based process’ as being that, in competition cases, “what we want is data without the disclosure. Probably collated by experts, from materials held by the parties, but using a non-disclosure-based process”.

Such an approach has recently been adopted in a number of cases before the CAT, including in the Boyle v Govia and McLaren v MOL collective actions and Smith J has suggested that this approach, which is being trialled in the CAT, could be rolled out more broadly. In both cases, Smith J expressed significant doubt about the efficiency and proportionality of traditional disclosure exercises in collective actions and instead stressed the importance of an expert-led approach, with experts being provided with “data and information, not reams of documents (whether paper or electronic) that must be sifted and analysed and turned into usable data and information”. In such a case, Smith J explained that one party’s expert should articulate the need for data or information, and for that data or information to be produced by the expert on the other side. The CAT recognised that it may be necessary for an ‘audit’ to be called for but, in the first instance, data should be regarded as reliable. Such an approach does not exclude the disclosure of documents but rather shifts the focus to disclosure of data and information, through an expert-led process.

From an order for “massive over-disclosure” in Genius Sports to trialling a “non-disclosure-based process” in the CAT would seem to be quite a jump but it is apparent that Smith J is prepared to explore a range of options to identify the most efficient approach in a given case. For example, both Boyle v Govia and McLaren v MOL are collective actions, which makes claimant side disclosure particularly challenging and justifies a different approach to disclosure being employed.

THE EXPANDING ROLE OF TECHNOLOGY

The options at both ends of the spectrum raise very significant considerations for parties to competition litigation. In the case of “massive over-disclosure”, parties will be concerned to ensure that, for example, privileged material is protected from disclosure, and in the case of “non-disclosure”, parties will be concerned to ensure that the process of selection and production of the data is transparent and robust.

Technology of course already plays an integral role in disclosure in almost all major litigation before the UK courts. But how might developments in artificial intelligence and machine learning be leveraged to assist the CAT in grappling with the challenges before it, and the parties in managing the approaches to disclosure ordered?

Predictive coding, an artificial intelligence (‘AI’) tool, emerged in the early 2010s and, in 2016, our firm, BCLP, was successful in the first contested application before the High Court for disclosure to be carried out using predictive coding.

In the years since, acceptance and use of AI and data analytics in litigation have accelerated rapidly, while data volumes have grown significantly and data types have evolved. Alongside this growth, there has been a spike in the number of bespoke software applications created to tackle the challenges faced by parties in litigation, as well as refinement and improvement of existing feature-rich tools and algorithms to improve the quality and throughput of results. For example, we have seen Continuous Active Learning (known as ‘CAL’) assist in managing the review of extremely voluminous datasets in a defensible manner. CAL is a process of active learning in which the technology prioritises key and relevant documents for further review by human agents, while de-prioritising documents similar to those that human agents have coded as not relevant.
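As a loose illustration (and not any review platform’s actual implementation), the prioritisation loop at the heart of CAL can be sketched in a few lines of Python. The corpus, the reviewer’s coding rule and the crude token-overlap score below are all invented stand-ins: commercial tools use trained classifiers in place of the `score` function.

```python
# Minimal sketch of a Continuous Active Learning (CAL) loop.
# A crude token-overlap score stands in for the trained classifier
# that a real e-disclosure platform would use.

def tokens(text):
    return set(text.lower().split())

def score(doc, relevant, not_relevant):
    """Higher score = more similar to documents already coded relevant."""
    t = tokens(doc)
    pos = sum(len(t & tokens(r)) for r in relevant)
    neg = sum(len(t & tokens(n)) for n in not_relevant)
    return pos - neg

def cal_review(unreviewed, review_fn, batch_size=2):
    """Repeatedly surface the highest-scoring documents for human review."""
    relevant, not_relevant = [], []
    while unreviewed:
        # Prioritise documents most similar to those coded relevant so far
        unreviewed.sort(key=lambda d: score(d, relevant, not_relevant),
                        reverse=True)
        batch, unreviewed = unreviewed[:batch_size], unreviewed[batch_size:]
        for doc in batch:
            (relevant if review_fn(doc) else not_relevant).append(doc)
    return relevant, not_relevant

# Toy corpus: the human reviewer codes anything mentioning "cartel" as relevant.
corpus = [
    "minutes of cartel pricing meeting",
    "cartel members agreed market shares",
    "office party invitation",
    "quarterly pricing report cartel contact",
    "canteen menu for march",
]
rel, not_rel = cal_review(list(corpus), lambda d: "cartel" in d)
print(len(rel))  # 3 documents coded relevant
```

The point of the loop, as in CAL proper, is that relevant documents surface early: once the first two are coded, the similar third one is prioritised ahead of the unrelated material.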

As parties grapple with the challenges posed by creative disclosure case management approaches ordered by the courts, they will likely turn to a wide range of technologies for cost-effective and proportionate solutions. For example, a party concerned about protecting privileged information under an ‘over-inclusive’ disclosure process need not rely on more traditional options, such as keyword searches, and can instead explore options such as pattern recognition and concept grouping. Concept grouping is a particularly powerful tool for identifying documents that are likely to be either relevant or not relevant.
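By way of a simplified sketch (the clustering in commercial review tools is far more sophisticated), concept grouping can be thought of as clustering documents by shared vocabulary, so that a reviewer can assess a whole cluster together rather than document by document. The greedy grouping, the Jaccard threshold and the toy documents below are illustrative assumptions only.

```python
# Simplified illustration of concept grouping: documents sharing enough
# vocabulary are placed in the same group, letting a reviewer treat a
# whole cluster (e.g. routine invoices) together rather than one by one.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def concept_groups(docs, threshold=0.25):
    """Greedily assign each document to the first group it resembles."""
    groups = []  # each group is a list of (doc, token_set) pairs
    for doc in docs:
        t = tokens(doc)
        for group in groups:
            # Compare against the group's first (seed) document
            if jaccard(t, group[0][1]) >= threshold:
                group.append((doc, t))
                break
        else:
            groups.append([(doc, t)])
    return [[doc for doc, _ in g] for g in groups]

docs = [
    "invoice for freight charges june",
    "invoice for freight charges july",
    "board minutes on pricing strategy",
    "board minutes on pricing review",
]
groups = concept_groups(docs)
print(len(groups))  # 2 groups: the invoices and the board minutes
```

In practice the pay-off is that a reviewer who codes one seed document in a group can make a fast, consistent call on the rest of that group.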

At the other end of the spectrum, parties grappling with the challenge of the ‘non-disclosure-based process’ will similarly need to find new ways of ensuring that the data they receive has been robustly selected and produced. In McLaren v MOL, the President suggested that the parties may wish to consider the use of a “data consultant… Someone – an organisation that is retained by all of the parties to assist in the synthesis of data”, to ensure that data has been properly produced and is reliable. As parties, and the courts, become increasingly reliant on technology and data, might we ever reach a point where it is necessary or desirable for the Tribunal to have a data expert on its panel, just as it often has an expert economist?

Until recently, references to the use of AI in disclosure have typically meant tools such as predictive coding and CAL. However, this is only the beginning. Companies are increasingly storing data in ways that result in larger, unstructured datasets, which present challenges for human agents seeking to conduct cost-effective reviews without the assistance of AI tools to sort and interpret the data.

This is particularly so, as Smith J has alluded to, in competition cases where millions of datapoints are used to model overcharges and rates of pass-on, for example. As the CAT becomes increasingly focused on data, it is inevitable that parties will need to turn to increasingly advanced new forms of AI to increase the efficiency and efficacy of disclosure reviews.

AI has already changed, and will continue to change, the way in which parties approach disclosure in complex damages actions, and it will be an important tool for the CAT in seeking to alleviate costs issues whilst preserving the integrity of the disclosure process. It is not inconceivable, for example, that cognitive computing evolves to the point at which the CAT may itself ask a form of AI technology a natural-language question about what the data shows or which data is most relevant to a particular issue.

As technologies develop, we may see an increased need for data experts and data consultants to assist both the parties and the CAT with adjusting to new ways of approaching disclosure issues.

Jason Alvares
Senior Forensic Technology Manager
Business and Commercial Disputes
Tel: +44 (0) 20 3400 3272

India Fahy
Associate
Antitrust and Competition
Tel: +44 (0) 20 3400 2250

Edward Coulson
Partner
Antitrust and Competition
Tel: +44 (0) 20 3400 4968