
The Intersection of AI and International Arbitration: Promises and Pitfalls



Muhammad Siddique Ali Pirzada / 21 August 2024

The rapid transformation of Artificial Intelligence (AI) from catchphrase to pivotal technological force has been nothing short of extraordinary. A recent Goldman Sachs study estimates that AI could automate approximately 25% of all occupational tasks; within the legal sector, that figure rises to 44%, heralding a profound paradigm shift.

Notwithstanding these indicators, the incorporation of AI into the routine operations of arbitration professionals has remained relatively limited. A 2021 study conducted by White & Case revealed that 49% of arbitration practitioners seldom or never utilize AI tools, such as data analytics or technology-assisted document review. Subsequent research in 2023 corroborated these findings, indicating that the levels of AI adoption have remained largely unchanged.

The practice of electronic data review has become ubiquitous, providing greater efficiency compared to traditional manual methods, particularly when handling large datasets. Prior to the review, data must be gathered and processed—such as converting documents into readable formats and eliminating duplicates—before being uploaded to a review platform like Relativity. The review process itself can vary, encompassing manual review, technology-assisted review (TAR), or generative AI-enabled review.
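
By way of illustration, the duplicate-elimination step of that pre-review processing can be sketched in a few lines of Python. This is a minimal sketch assuming the collected documents have already been converted to plain text; the folder name and function are invented for illustration, and platforms such as Relativity handle ingestion through their own tooling.

```python
import hashlib
from pathlib import Path

def deduplicate(folder: str) -> list[Path]:
    """Keep one representative file per unique document body,
    mirroring the duplicate-elimination step described above."""
    seen: set[str] = set()
    unique: list[Path] = []
    for path in sorted(Path(folder).rglob("*.txt")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in seen:  # later files with identical content are duplicates
            seen.add(digest)
            unique.append(path)
    return unique

# e.g. documents = deduplicate("collected_evidence/")
# only the surviving files would then be uploaded to the review platform
```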

AI transcends basic document review tasks. It enhances document production through functions such as identifying pertinent documents, redacting sensitive information, and segregating privileged or confidential materials unsuitable for disclosure.
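
A simplified sketch of the redaction function appears below. The patterns are hypothetical stand-ins; production-grade review tools rely on trained entity-recognition models rather than regular expressions alone.

```python
import re

# Hypothetical patterns; real review tools use trained entity-recognition
# models rather than regular expressions alone.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> str:
    """Replace each sensitive span with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Contact j.doe@example.com, account DE89370400440532013000."))
# -> Contact [REDACTED EMAIL], account [REDACTED IBAN].
```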

The deployment of AI raises critical ethical and procedural issues, particularly concerning deepfakes in multimedia content such as videos, photos, and audio recordings. Existing institutional regulations and procedural laws lack explicit frameworks governing the use of AI in document review and the repercussions of its potential misuse in arbitration proceedings. Recognizing this gap, the Silicon Valley Arbitration & Mediation Center (SVAMC) took a proactive step by releasing draft guidelines, the SVAMC AI Guidelines, on August 31, 2023, aimed at addressing these challenges.

Achieving consensus on AI’s utilization among diverse stakeholders in arbitration proceedings is crucial for ensuring equitable participation. Initially, the arbitral tribunal and involved parties are encouraged to define parameters governing AI’s application in the proceedings, including a mandate to disclose AI utilization, as stipulated in SVAMC AI Guideline 3.

SVAMC AI Guideline 1 emphasizes users' responsibility to diligently understand and mitigate the limitations, biases, and risks associated with AI tools, which necessitates a rigorous review process to verify the accuracy of AI-generated submissions. SVAMC AI Guideline 4 assigns accountability to parties and their representatives for any uncorrected errors or inaccuracies in AI-generated output, while Guideline 5 prohibits the use of AI to fabricate evidence, undermine the authenticity of evidence, or mislead the arbitral tribunal and opposing parties. Nevertheless, certain procedural risks remain unresolved, including the potential omission or deliberate concealment of documents. These issues could impact fundamental principles such as the right to a fair hearing and, in severe instances, result in violations of procedural rights.

The application of AI in the selection and appointment of arbitrators presents notable challenges, distinct from its debated use in jury selection. A primary challenge stems from conflicting objectives among stakeholders: institutional appointments prioritize fair, impartial, and rigorous arbitration processes, whereas parties generally favour arbitrators who align with their viewpoints and are predisposed to rule in their favour.

Another challenge arises from the substantial costs involved in training AI models, which can reach millions of USD. To mitigate these expenses, institutions or frequent users of arbitration could collaborate to develop such tools collectively. However, it remains uncertain whether institutions or parties would be willing to make significant investments in tools designed to streamline arbitrator selection and appointment processes. Specifically, there is scepticism about whether institutions and parties would collaborate on training an AI model for this purpose.

Apart from financial challenges, pertinent data for arbitrator selection—such as past decisions, personal viewpoints, and biases—is severely restricted and often not publicly available. Furthermore, AI models, which rely on statistical patterns derived from existing data, tend to perpetuate stereotypes observed in the historical composition of arbitral tribunals, including the portrayal of arbitrators as "male, pale, and stale".

The SVAMC AI Guidelines allow for the use of AI in researching potential arbitrators. However, they emphasize the importance of not solely relying on AI for arbitrator selection. Human input is necessary, and the AI tool’s selection process must be critically and independently evaluated to mitigate biases and other limitations, as outlined in SVAMC AI Guideline 1.

Decision-making stands at the forefront of every dispute, and AI can aid arbitrators significantly in this critical phase. However, it is crucial to recognize that AI lacks cognitive reasoning abilities; its outputs rely on statistical probabilities derived from training data. AI is therefore most effective in contexts with abundant case law and standardized, recurring factual and legal scenarios, such as cases involving product liability, false advertising, or insider trading with numerous affected parties.

In German courts, AI tools have been piloted for specific applications. The Regional Court of Frankfurt, for instance, tested FraUKe (Frankfurter Urteils-Konfigurator Elektronisch, an electronic judgment configurator) to aid decision-making in mass cases involving air passenger rights claims. Similarly, the Higher Regional Court of Stuttgart implemented OLGA (Oberlandesgerichtsassistent, a higher-regional-court assistant) to categorize appeal filings and customize template judgments based on case-specific details in mass litigation linked to the diesel emissions scandal. Despite a generally positive reception on their introduction, a 2022 position paper issued by a conference of presidents of Germany's higher courts suggests that AI is not ready to replace human decision-making in the near term. There are concerns that judges may gradually lose their discernment skills with increasing reliance on such tools. Additionally, scholars advocate for transparency in AI algorithms so that the relevant factors can be comprehensively evaluated if automated decision-making processes are implemented.

Another application of AI in arbitration is case prediction or assessment. This use is distinct from actual decision-making, as it primarily assists parties and counsel before the award is rendered. Established AI tools such as Lex Machina and Solomonic leverage vast databases of legal judgments to enhance legal professionals' case assessments. Lex Machina, for instance, facilitates analysis of courts, judges, opposing parties, and counsel, generating concise case summaries and predicting potential outcomes based on different legal strategies. Solomonic, a UK-based litigation analytics platform, provides tailored updates on recent legal developments, highlights critical relationships, compiles actionable insights, and offers rapid, confidential access to court documents. It also aids trial preparation and forecasts case outcomes by analysing past behaviours, a crucial indicator of future legal action.
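
To give a sense of how such statistical prediction works in principle, the toy sketch below fits a simple scikit-learn classifier to invented historical data. The features, figures, and model choice are illustrative only and bear no relation to the proprietary methods of Lex Machina or Solomonic.

```python
# Toy illustration only: features, figures, and model are invented and bear
# no relation to the proprietary methods of Lex Machina or Solomonic.
from sklearn.linear_model import LogisticRegression

# Hypothetical features per past case: claim value (USD m), number of
# favourable precedents, and the arbitrator's historical claimant win rate.
X = [[5.0, 3, 0.62], [0.8, 1, 0.41], [12.0, 6, 0.70], [2.5, 0, 0.35]]
y = [1, 0, 1, 0]  # 1 = award rendered for the claimant

model = LogisticRegression().fit(X, y)
probability = model.predict_proba([[4.0, 2, 0.55]])[0][1]
print(f"Estimated probability of a claimant award: {probability:.0%}")
```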

The challenges associated with employing AI for decision-making in arbitration necessitate careful examination. A fundamental limitation is that AI tools lack the capability for legal reasoning: they render decisions not on the basis of reasoned judgment but on the most statistically probable outcome. This raises concerns about AI's ability to accommodate the unique circumstances of individual cases. Furthermore, dependence on historical data may stifle the evolution of the law, as innovative legal arguments would rarely prevail. Additional concerns revolve around confidentiality, a primary factor in parties' preference for arbitration over litigation. When utilizing AI tools, parties and arbitrators must uphold the confidentiality of the proceedings. They must also rigorously assess the source and reliability of historical training data to mitigate biases and avoid perpetuating previous errors.

While AI shows promise in optimizing arbitration processes and cutting costs, it currently cannot replace the indispensable roles of lawyers and arbitrators. This limitation stems from the essential human expertise and judgment needed to navigate complex legal issues and ethical considerations, as well as the substantial costs associated with developing AI models. Therefore, the integration of AI in arbitration requires a comprehensive evaluation of its potential benefits and challenges. Parties and arbitral tribunals must carefully consider these factors early in the proceedings and establish a cohesive approach within the procedural framework. This underscores the critical importance of upholding foundational principles, as outlined in the SVAMC AI Guidelines.
