Arbitration and Artificial Intelligence: The Rise of Robo-Arbitrators
Muhammad Mustafa Arif / August 2025
Over time, Artificial Intelligence (AI) has moved from the sidelines to the heart of legal innovation, revolutionizing processes from document review to judicial analytics. Arbitration, highly prized for its efficiency and adaptability, is now witnessing a quiet but persistent rise of AI-fortified systems, commonly referred to as “robo-arbitrators.”
These AI systems, which range from award-drafting automation to more advanced decision-making tools, may make proceedings more efficient and less expensive. But the shift raises issues that go beyond logistics. Fundamental legal principles, such as procedural justice, party autonomy, transparency, and the legitimacy of outcomes, are under threat. As arbitration moves toward algorithmic adjudication, it is important to ask whether AI can properly be harmonized with the due process guarantees embedded in international legal instruments, especially the 1958 New York Convention.
Present and Emerging Practices
A recent survey by White & Case highlights the rapid adoption of AI in legal practice, with 64% of participants reporting that they used AI tools for legal research between 2020 and 2025. In practice, AI-driven programs such as Harvey AI, already employed by major firms including Allen & Overy, have improved the speed and accuracy of legal document drafting. Similarly, tools like Lexis+ AI and Casetext’s CoCounsel provide expert guidance on legal rules and case law, compressing formerly lengthy research and drafting tasks into seconds.
Additionally, AI has transformed how disputes are evaluated and navigated. Predictive analytics tools such as Premonition can scan and aggregate vast quantities of data to identify patterns in arbitrator behavior, providing critical input for case strategy. Meanwhile, the integration of natural language processing (NLP) allows real-time searching for relevant arguments and precedents within lengthy legal documents.
The growing acceptance of AI reflects a significant shift in professional attitudes, positioning it as a core element of arbitration rather than a supplementary tool. This transition highlights evolving notions of expertise and procedural rigor, alongside enhanced efficiency.
The adoption of AI by the Singapore International Arbitration Centre (SIAC) and the International Chamber of Commerce (ICC) further attests to its embedding within the arbitral system. These innovations aim to make case filing, procedural management, and document distribution more efficient without substituting for human arbitration, preserving procedural fairness and accessibility.
The types of AI integration into arbitration can be conceptualized on a spectrum. At the supported end, AI assists human arbitrators with document review and preparation. Augmented systems go further, offering probabilistic or strategic suggestions to guide human decision-making. The most extreme form, autonomous arbitration, places AI as the sole decision-maker, a concept still largely theoretical but increasingly tested and debated in practice and academia.
Compatibility with Procedural Fairness and Due Process
In arbitration, arbitrators are bound by due process requirements, a concept rooted in Magna Carta and first expressed as “due process of law” in a statute of 1354. The incorporation of AI into arbitration raises serious concerns about procedural justice and due process. Arbitrators must maintain the basic values of equality and the right to be heard, as enshrined in Article 18 of the UNCITRAL Model Law and reflected in Articles V(1)(b) and V(1)(d) of the New York Convention.
Arbitral fairness is not mere procedural formality; rather, it requires that the decision be perceived as just by a fair-minded observer who takes account of the particular facts of the case. This matters because AI is no longer confined to assisting with award drafting: it is also used to review evidence, calculate damages, and filter arguments.
A crucial drawback of using AI is the “black box” problem: a transparency deficit in which systems produce results without explanatory reasoning. This sits uneasily with the New York Convention, under which enforcement may be denied where a party was unable to present its case (Article V(1)(b)) or where the procedure departed from what the parties agreed (Article V(1)(d)). AI tools can make it difficult for parties to understand how factual or legal conclusions were reached, posing a serious challenge to natural justice and procedural integrity.
Additionally, bias in training data can lead to discriminatory or unjust outcomes, further exacerbating the problem. International arbitration often involves parties from very different linguistic, cultural, and legal backgrounds, so such biases erode the principle of equal treatment. Cary Coglianese’s warning is apt: AI systems that are poorly trained or inadequately tested can perpetuate entrenched inequalities, particularly where datasets lack jurisdictional diversity or cultural sensitivity. Article 18 of the Model Law, by contrast, demands effective access to the tribunal’s reasoning, whether human or AI-assisted, and these biases directly undermine that guarantee.
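To make the idea of bias screening concrete, the following is a minimal sketch, assuming entirely hypothetical case-assessment data, of how an institution might compare an AI tool’s favorable-outcome rates across party jurisdictions. The jurisdiction labels and the 20% tolerance are illustrative assumptions, not an established audit standard.

```python
# Minimal sketch of a bias screen: compares a model's favorable-outcome
# rates across party jurisdictions. All data and thresholds here are
# hypothetical illustrations, not a validated audit methodology.

from collections import defaultdict

def outcome_rates_by_group(records):
    """records: iterable of (jurisdiction, predicted_favorable: bool)."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for jurisdiction, pred in records:
        totals[jurisdiction] += 1
        favorable[jurisdiction] += int(pred)
    return {j: favorable[j] / totals[j] for j in totals}

def disparity_flag(rates, tolerance=0.2):
    """Flags the tool if favorable-outcome rates between any two
    jurisdictions diverge by more than `tolerance` (arbitrary cutoff)."""
    values = list(rates.values())
    return max(values) - min(values) > tolerance

# Hypothetical predictions from an AI case-assessment tool.
sample = [("UK", True), ("UK", True), ("UK", False),
          ("PK", False), ("PK", False), ("PK", True)]

rates = outcome_rates_by_group(sample)
print({j: round(r, 2) for j, r in rates.items()})  # {'UK': 0.67, 'PK': 0.33}
print(disparity_flag(rates))                        # True -> human review
```

A flag raised by such a screen would, of course, only trigger human review; it is not itself proof of discrimination.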
In this regard, the IBA Guidelines on Party Representation in International Arbitration further stress that fairness requires disclosure and procedural candor. Due process is at risk where parties are unaware that AI has played a role in evaluating arguments or rendering a decision, or where they are denied the opportunity to respond to AI-generated insights, even when the omission is unintentional. Failing to give parties the chance to address substantive issues or findings that were never properly “in play” undermines procedural legitimacy and renders an award vulnerable to annulment or unenforceability, a principle courts have affirmed in a number of cases.
Such issues point to a significant gap in the present arbitral framework. While AI can improve efficiency and predictability, its uncontrolled application is hostile to core notions of fairness, reasonableness, and party involvement. For tribunals to guarantee legitimacy and enforceability under the New York Convention, procedural safeguards must be introduced, providing transparency, bias screening, disclosure, and party autonomy, so that technological advancement stays in step with procedural justice.
Accountability and Legitimacy of AI-Arbitrated Awards
At first glance, the automation of arbitral functions seems technically impressive; on closer inspection, however, it presents a layered crisis of legal accountability. Where an AI system renders a flawed award, who bears the responsibility? Traditional modes of liability attach only to human actors capable of forming intention. With AI, liability could fall on the developer, the arbitral institution, or the tribunal chair.
AI’s inability to form intent or grasp normative consequences fundamentally clashes with fault-based legal systems, which hinge on intention and fault in tort and public law, elements no machine can satisfy. By operating solely through statistical correlation rather than moral deliberation, AI represents not just a new instrument but an entirely different class of adjudicator. This ontological gap defies existing regulatory frameworks and frustrates traditional avenues for post-award accountability.
In response, arbitral institutions and regulatory bodies have developed ethical standards intended to align AI with values of fairness and legitimacy. The International Council for Commercial Arbitration (ICCA) and the Silicon Valley Arbitration and Mediation Center (SVAMC) both emphasize transparency, explainability, and the need for human oversight. The Chartered Institute of Arbitrators’ (CIArb) 2025 Guideline insists that AI be used to assist, not to assume, the determinative role. Ultimately, these instruments emphasize that AI cannot replace the human arbitrator’s duty to exercise independent legal judgment, particularly in the reasoning and formulation of awards.
The issue of legitimacy directly affects enforceability under the New York Convention. Article V(1)(a) permits refusal of enforcement where a party is under “incapacity.” Though traditionally applied to legal or mental incapacity, this provision could extend to situations where decision-making is entirely delegated to autonomous systems, rendering party involvement effectively meaningless. It has been suggested that such procedural exclusion may, in substance, resemble incapacity even if not formally recognised as such.
Article V(1)(b), which safeguards due process, becomes critical where parties are unaware of the use of AI or cannot scrutinize its reasoning. Courts increasingly interpret this provision as guaranteeing meaningful participation. The use of “black box” systems that lack explainability, especially in rendering findings of fact or quantifying damages, may thus constitute a breach of the right to be heard.
Under Article V(1)(d), enforcement may be refused where the arbitration procedure departs from what the parties agreed, including the unnotified use of AI in core decision-making; commentators warn that such departures may breach the parties’ procedural expectations. Similarly, under Article V(1)(e), enforcement may be denied if an award is set aside at the seat, as is likely where courts reject machine-generated outcomes as incompatible with public policy.
Public policy objections under Article V(2)(b) remain unpredictable. While narrowly construed, some jurisdictions may refuse to enforce AI-generated awards lacking human oversight, citing violations of justice and legal responsibility. Given global disparities in attitudes toward AI, enforceability in cross-border settings remains uncertain.
The core challenge is not merely technical but normative. As Thomas Franck argues, legitimacy rests on fairness, accountability, and consent. When AI obscures reasoning, bypasses ethical scrutiny, or undermines procedural safeguards, it threatens this legitimacy. To preserve trust in arbitration and ensure enforceability, clear procedural boundaries, party consent, and transparency must guide the use of such technologies.
Possible Solutions and Future Directions
As AI becomes increasingly integrated into arbitration, safeguarding core procedural values demands clear constraints. Hybrid adjudicative models—where AI supports but does not replace human judgment—are now standard. Institutions such as SVAMC and CIArb have emphasized that AI must remain subordinate, with CIArb’s 2025 Guidance requiring prior disclosure and limiting AI to non-decisional roles.
Transparency is critical. To avoid reliance on unexplainable “black box” outputs, systems must be designed for traceability and auditability. The EU AI Act, effective 2026, treats adjudicative AI as “high-risk,” mandating human oversight, audit logs, and explainable outputs. Arbitration institutions could adopt similar requirements, ensuring all relevant AI settings and outputs are disclosed when materially relied upon.
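By way of illustration, the following is a minimal sketch of what a traceable, tamper-evident audit log for AI use in a proceeding might look like. The field names and the hash-chaining scheme are assumptions for illustration; neither the EU AI Act nor any institution prescribes this particular format.

```python
# Minimal sketch of a tamper-evident audit log for AI use in a
# proceeding. Field names and the hash-chaining scheme are illustrative
# assumptions, not a prescribed institutional format.

import hashlib
import json
from datetime import datetime, timezone

class AIAuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, tool, version, task, output_summary):
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,               # e.g. a document-review model
            "version": version,         # exact model/version relied upon
            "task": task,               # what the tribunal used it for
            "output_summary": output_summary,
            "prev_hash": self._last_hash,
        }
        # Chain each entry to the previous one so any later tampering
        # invalidates every subsequent hash.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

# Hypothetical usage: every material reliance on an AI tool is logged.
log = AIAuditLog()
log.record("doc-review-model", "1.4.2", "privilege screening",
           "flagged 212 of 9,301 documents for human review")
```

A log of this kind would give parties and reviewing courts something concrete to audit when an award is challenged on procedural grounds.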
Institutional ethics guidelines are also taking shape. SVAMC’s 2024 Guidance requires disclosure, confidentiality safeguards, and the retention of human responsibility. CIArb has introduced model procedural orders reinforcing these obligations, and similar standards are expected from the ICC, LCIA, and SIAC. A standardized AI Disclosure Protocol would further promote transparency without imposing inflexible regulation, enabling parties to exchange information about AI tools, functionality, and reasoning on request.
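What a standardized AI Disclosure Protocol exchange might contain can be sketched in the same spirit. The schema and field names below are hypothetical illustrations; no institution has adopted this exact format.

```python
# Minimal sketch of a structured AI disclosure a party might exchange
# on request. All field names and values are hypothetical.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class AIDisclosure:
    tool_name: str                 # e.g. a hypothetical drafting assistant
    provider: str
    purpose: str                   # what the tool was used for
    decisional_role: bool          # True if it influenced findings
    human_review: str              # human oversight actually applied
    limitations: list[str] = field(default_factory=list)

disclosure = AIDisclosure(
    tool_name="hypothetical-drafting-assistant",
    provider="ExampleVendor",
    purpose="first-draft summaries of witness statements",
    decisional_role=False,
    human_review="all summaries verified against transcripts by counsel",
    limitations=["not trained on the governing law's case reports"],
)

# Serialized form suitable for exchange between the parties.
print(json.dumps(asdict(disclosure), indent=2))
```

Keeping the disclosure structured rather than free-form would let parties compare tools and raise objections on a like-for-like basis.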
Ethical audits must also become routine, detecting bias, opacity, and risks to procedural fairness, and anchoring long-term accountability in line with global standards.
Conclusion
While AI offers speed and analytical precision, it must not displace the foundational principles of fairness, consent, and enforceability enshrined in the New York Convention. Awards rendered through opaque or unsupervised AI risk legal challenge and diminished legitimacy. To align innovation with procedural integrity, the arbitral community must adopt hybrid models, mandate explainable AI, institutionalize soft law tools like disclosure protocols, and embed regular audits. Crucially, global coordination—through UNCITRAL, ICCA, WIPO, and national bodies—is essential to ensure AI enhances, rather than undermines, the integrity of arbitration.