Who's Really Judging? AI in Arbitration at PAW 2026
Explore the boundary between AI assistance and independent judgment in international arbitration in this summary of TrialView's panel at PAW 2026.
The use of AI by arbitrators, and its potential impact on enforceability, is one of the most significant issues facing international arbitration. Simon Greenberg, Partner at Clifford Chance; Emily Hay of ArbBoutique and Co-Chair of the ICC Task Force on AI; Anneliese Day KC of Fountain Court Chambers; and Laragh Lee of Trinity International joined Stephen Dowling SC, founder and CEO of TrialView, for a discussion centred on common enforceability risks, including reasoning gaps, internal inconsistencies, procedural compliance, and jurisdiction-specific vulnerabilities.
A live demonstration of TrialView’s AI-based solution provided the catalyst for a very lively exploration of a core question: at what point does AI move from fact-checking into the territory of judgment?
The Uncontroversial Entry Points
The panel opened by acknowledging that certain uses of AI sit comfortably within existing practice. Deploying AI to summarise submissions, generate checklists, or identify gaps in a draft award bears a reasonable resemblance to work traditionally carried out by junior lawyers or tribunal secretaries. The efficiency case is straightforward: time is saved, consistency is improved, and accuracy can be enhanced.
The panel, however, was careful not to inflate that analogy. A junior lawyer, however inexperienced, operates within a professional framework of trained judgment, ethical obligations, and personal accountability. AI produces outputs without ownership. It can replicate the form of reasoning without necessarily engaging in reasoning itself. That distinction is manageable when AI is used to organise or fact-check material. It becomes considerably more significant when those outputs begin to shape how an arbitrator frames the case.
Where Assistance Becomes Influence
The panel explored this boundary through a concrete example: using AI to identify arguments not addressed in a draft award. Framed as a compliance check, the function appears innocuous, an additional pair of eyes on completeness. Yet the moment that list begins to influence what the arbitrator treats as important, its role shifts. It is no longer merely checking but, in some sense, participating in the prioritisation of issues.
The same logic applies in document review. Asking AI to flag "relevant sections" is not materially different from asking it to identify "key issues." Relevance is not an objective category; it is inherently bound up with legal reasoning. The concern is not simply that an arbitrator might outsource a discrete decision, but that they may outsource the formation of the mental framework within which decisions are made. That is a subtler, and more serious, risk.
Transparency and Its Limits
The panel converged on transparency as a necessary condition for any legitimate use of AI in arbitration, though panellists were direct about its limitations. Disclosure of AI use raises immediate practical questions: what must be disclosed, and at what level of detail? The fact of use? The nature of the tasks? The specific prompts entered? There is a real risk that granular disclosure obligations generate satellite disputes that serve no one's interests.
The question of parity also surfaced. Where one party has access to sophisticated AI tools and the other does not, some disparity in capability may arise. Panellists noted that this is not entirely unlike existing asymmetries in legal resources, but acknowledged that the analogy is imperfect and that the profession should be attentive to how the gap develops.
The Limits of Delegation
Perhaps the most important point to emerge from the session was a clear articulation of what AI is not, and should not be, asked to do. Arbitrators are not appointed to produce awards in a mechanical sense. They are appointed to decide. The drafting process is a cognitive and creative act, in which arguments are tested, connections are made, and conclusions take shape through the act of writing itself. That process cannot be meaningfully delegated without altering the nature of the role.
The analogy to a tribunal secretary was raised and examined. Some panellists suggested that AI use might be treated similarly, requiring disclosure and potentially party consent. Others pushed back: unlike a secretary, AI is not a person, and its capabilities and limitations are far less easily defined or scrutinised.
An Unresolved Tension
The session did not produce settled best practice, nor did it claim to. What it did produce was a clearer sense of where the real tension lies: not between those who favour AI and those who oppose it, but between the genuine efficiency gains AI can offer and the foundational expectation that an arbitrator has personally engaged with the material and exercised independent judgment.
Used carefully, AI has the potential to enhance the quality and robustness of decision-making. Used uncritically, it risks blurring the line between support and substitution. The panel was clear that this is not a binary choice between adoption and rejection, but a matter of professional discipline: really understanding, at each stage, what AI is doing and why.
What emerged most forcefully from the session was that the fundamental obligation on arbitrators remains unchanged. The parties have appointed a decision-maker, not a process manager. Whatever tools are used in the course of that work, the judgment must be theirs, the reasoning must be theirs, and the accountability must be theirs. AI cannot share in that. The profession's task is to ensure that it does not obscure it either.
See TrialView in action
As one of the leading litigation platforms, TrialView is the perfect addition to your tech stack. Book a free demo to get started.
Book a demo