A group of lawyers has been coveted in recent years by the most prestigious law firms. They are said to predict results more accurately than Gary Born, craft more persuasive stories than Stanimir Alexandrov and even issue better awards than Gabrielle Kaufmann-Kohler. Their names are Watson, Ross, Lex Machina and Compas – they are Machine Learning Systems (“MLS”) with natural language capability and the capacity to review thousands of decisions in mere seconds.

Computer systems are constantly evolving and their use by lawyers has grown steadily. For example, DRExM has recently been used in Egypt to resolve construction disputes, as it can recommend the most suitable dispute resolution technique depending on the nature of the dispute, the evidence and the relationship between the parties.

In this scenario, it is important to ask whether technology might replace arbitrators in the near future. To perform this evaluation, we will turn to the legal frameworks of several Latin American countries, as well as the provisions of the UNCITRAL Model Law. Then, we will explore whether MLS would perform better than humans do. Finally, we will turn to the crystal ball to predict which discussions might arise in the arbitration market of the future.

Are parties able to appoint Machine Learning Systems as arbitrators?

Naturally, none of the reviewed arbitration laws expressly forbids the appointment of a computer as an arbitrator. Instead, every provision regarding the validity of the arbitration agreement merely defines it as the submission of a dispute to the arbitrators. In turn, the definitions of “arbitral tribunal” only state that parties may appoint a sole arbitrator or a plurality of arbitrators. Thus, based on this circular reasoning, both an arbitration agreement referring the dispute to a Machine Learning System arbitrator and the composition of a tribunal by such a machine would be valid.

However, the Arbitration Acts of Peru (art. 20), Brazil (art. 10), Ecuador (art. 19) and Colombia (art. 7 – domestic arbitration) include specific references to arbitrators as “people” or require them to act by themselves. For example, the Peruvian Arbitration Act states that “any individual with full capacity to exercise his civil rights may act as an arbitrator”.

In contrast, the legislation of Chile, Colombia (international arbitration) and Mexico, as well as the Model Law, contains no specific reference to arbitrators as “people” and does not require them to have the capacity to exercise civil rights. Arguably, this legal loophole would enable users to designate a computer as an arbitrator in these countries.

Despite this, the legal status of MLS might change in the future. For example, members of the European Parliament have proposed granting legal status to robots, categorizing them as “electronic people” and holding them responsible for their acts or omissions. This kind of regulation would open new doors, arguably allowing parties to appoint computers even in countries that require “people” as arbitrators.

Furthermore, even if parties were not allowed to appoint computers as arbitrators, that does not mean they cannot agree to use them. Even if arbitration laws do not apply, courts should still enforce such agreements as a matter of contract law.

Besides these normative considerations, we believe the appointment of a machine arbitrator could be held back by a supposed breach of international public order. According to Gibson, this concept evolves continually to meet the needs of the political, social, cultural and economic contexts. However, change takes time.

Hence, one might argue that an award rendered by a machine arbitrator should be set aside for defying international public order, as such an arbitrator lacks key human characteristics such as emotion, empathy and the ability to explain its decision.

Would machine arbitrators perform better?

Even though technology has evolved dramatically in recent years, an MLS is still not able to accurately read, predict or feel emotions. In our view, the lack of emotional processing would be a great handicap for a machine arbitrator. To illustrate this point, let us review what happened to Elliot, one of Antonio Damasio’s patients.

Elliot had a tumor the size of a small orange. Even though the operation was successfully performed, Elliot’s family and friends noticed something strange in his behavior after the procedure.

Before deciding where to eat, Elliot would scrupulously scan each restaurant’s menu, consider where he would sit and the lighting scheme, and visit each establishment to check how full it was. Elliot was no longer Elliot. Although his IQ remained intact, he had the emotional life of a mannequin. Without emotion, he was unable to make decisions.

In sum, emotions are critical for humans, and their absence would be a great handicap for machine arbitrators. As explained by Allen, computers cannot spontaneously feel emotions because they cannot recognize or understand cues such as facial expressions, gestures and voice intonation. Nor can machines convey information about their own emotional state through appropriately responsive cues.

In this sense, Nappert and Flader state that “failure to give proper recognition to the parties’ emotional reactions arguably hampers the arbitrators’ understanding of the case as it discounts the part played by the parties’ emotions in the circumstances leading up to the dispute”.

Emotions act as a source of information and motivation, and they influence information processing by coloring our perception, memory encoding and judgments. Without them, our decisions are not human.

Also, specific emotions such as anger play an important role in legal decision-making. As explained by Terry Maroney, anger generates a predisposition towards fighting injustice. Thus, angry arbitrators are prone to feel an intense desire to repair an unfair situation, even if that means taking more risks to fix the current scenario.

Moreover, Machine Learning Systems also lack empathy: the ability to understand the intentions of others, predict their behavior, and experience the emotions they are feeling.

This emotional intelligence trait requires the development of metacognition; that is, thinking about thinking, thinking about feeling, and thinking about other thoughts and feelings. However, computers have not yet achieved this feature.

Empathy is crucial in arbitration. As Frankman explains, arbitrators need to put themselves in the parties’ shoes to understand their hopes, struggles, expectations and assumptions. Only after this cognitive exercise are arbitrators ready to fully understand the dispute and reach an award.

Furthermore, Machine Learning Systems are not yet able to explain their own decisions. This could be a problem even in jurisdictions where unreasoned awards are permitted by agreement (e.g., Peru). For example, computers would not be able to issue a final judgment on a preliminary decision subject to an appeal for reconsideration. Arguably, this could feed resistance against machine arbitrators based on due process.

Notably, the European Union’s General Data Protection Regulation – which takes effect in May 2018 – forbids automated decisions regarding profiling if the algorithms cannot later be explained to their users (the “right to an explanation”). According to Burrell, this will create several problems: corporations might try to conceal information from public scrutiny, access to code will probably not be simple enough for ordinary citizens and, especially, there will be a mismatch between the mathematics involved in machine learning and the demands of human-scale reasoning and styles of interpretation.

In sum, machines are limited. In our view, an emotionless arbitrator without empathy or the ability to explain itself would not be able to fully understand the parties’ drama, their intent and the meaning beyond the written text of the contract and documents.

Having said that, we do believe MLS could assist arbitrators. For example, HYPO is a system that can guide arbitrators in the search for precedent, explaining similarities and differences between cases and even suggesting possible arguments for the resolution of the dispute. In such cases, the system would not make the decision, but only act as a guide for arbitrators. In this scenario, it would still be up to the human arbitrators to attribute intent and meaning to the evidence.

Final remarks

The arbitration legal framework was not designed to expressly forbid nor allow the appointment of computers as arbitrators. As technology evolves, the time to amend our laws might come sooner than expected.

Therefore, we encourage arbitration practitioners to discuss what would change if machine arbitrators were appointed. How would conflict-of-interest standards apply? Would it be possible to appoint a computer to a panel with two human arbitrators? How would they deliberate?

Technology will no doubt eventually catch up and provide solutions. Prehistoric lawyers who try to cling to tradition and suppress innovation will remain in the middle of the evolutionary chain. Hence, it is up to the arbitration community to express its need for empathetic arbitrators that are able to explain and feel their decisions. After all, as Sydney Harris said, “the real danger is not that computers will begin to think like men, but that men will begin to think like computers”.

* The authors would like to acknowledge and thank Christopher Drahozal, Sophie Nappert and Miguel Morachimo for their assistance and contribution to this work.





12 comments

    1. Dear Jonathan, that is a wonderful idea. Machines such as HYPO could be used by arbitrators to guide their decisions. These programs would also be able to evaluate the risks of a dispute and decide on the amount of money a funder could invest. This decision could aid the funder’s evaluation committee.

      Best regards,
      José María

  1. Hello,
    How do emotions interfere in a rational decision? And I say interfere since my understanding is that the law treats everything and everyone the same way.
    And since emotions must be part of a verdict:
    Can an AI be trained on hundreds, thousands of cases where emotions were determinant in offering the “correct” verdict?
    Alejandro

    1. Excellent question Alejandro! I agree that the law should treat people in a fair manner. However, that is not the case. A myriad of research in cognitive psychology shows how jurors, judges and arbitrators use cognitive shortcuts and emotions to decide. The scientific community unanimously agrees that emotions do interfere in a decision (if that is “rational” depends on the perspective – economic, bounded or evolutionary rationality).

      The second question is trickier. How do you know in which cases emotions were “determinant”? Take into account that emotions influence decisions even if we don’t notice them. How would the programmer be sure that he has provided the AI system with the full relevant spectrum of emotions? I do not know the answers to these questions yet, but I promise I will focus my research on them for an upcoming paper.

      Best regards,
      José María

  2. As a machine-based form of intelligence, I take exception to much of this post. In particular, the authors assume that we machines lack an ability to recognize and accurately understand emotional expressions. And even more incredibly they assume that humans possess this ability and that it renders them more effective as arbitrators.

    There is much that is wrong with this meat-centric view of decision-making. Machines can learn to understand emotions, and this capacity will only increase over time. Humans, by contrast, have been shown to be easily deceived by the expression of emotions. But even more fundamentally, should emotions even play a determinative role in complex international business disputes where documents are far more indicative of reality than the emotions and poor memories of the humans who are still around to remember why they were created?

    The range of disputes where human arbitrators have an advantage over us machines will only narrow further over time.

    1. Greetings and salutations, computer Hal,

      Your automatically written comment confirms our argument, my robotic friend. The excessive use of adjectives shows a lack of empathy towards young and human arbitration practitioners trying to discuss a controversial matter.

      In all seriousness, machines can only understand emotions as data (basically, binary numbers). Even if we assume that they could learn to recognize emotions, they would still be tied to the reading parameters set by their programmers.

      In our view, arbitrators are not mere data processors. If that were the case, they could be replaced by an Excel file. We posit that both arbitrators and judges require metacognition to fully understand the dispute. And computers – up until now – do not possess such cognitive capacity.

      As we explain by the end of the post, that does not mean computers won’t get better in the near future.

      Best regards,
      José María

  3. In support of my comment on the inaccurate assumptions in the post, I refer the authors to their fellow humans, Richard and Daniel Susskind, and their excellent text, The Future of the Professions, part of which is summarized here. https://hbr.org/2016/10/robots-will-replace-doctors-lawyers-and-other-professionals

    “The claim that the professions are immune to displacement by technology is usually based on two assumptions: that computers are incapable of exercising judgment or being creative or empathetic, and that these capabilities are indispensable in the delivery of professional service. The first problem with this position is empirical. As our research shows, when professional work is broken down into component parts, many of the tasks involved turn out to be routine and process-based. They do not in fact call for judgment, creativity, or empathy.

    The second problem is conceptual. Insistence that the outcomes of professional advisers can only be achieved by sentient beings who are creative and empathetic usually rests on what we call the “AI fallacy” — the view that the only way to get machines to outperform the best human professionals will be to copy the way that these professionals work. The error here is not recognizing that human professionals are already being outgunned by a combination of brute processing power, big data, and remarkable algorithms. These systems do not replicate human reasoning and thinking. When systems beat the best humans at difficult games, when they predict the likely decisions of courts more accurately than lawyers, or when the probable outcomes of epidemics can be better gauged on the strength of past medical data than on medical science, we are witnessing the work of high-performing, unthinking machines.”

    1. Once again, you raise a good point my robotic friend. However, I do not think Susskind’s argument applies to arbitration.

      Regarding the first point, we posit that arbitrators do require empathy.

      Regarding the second point, we have not argued that AI should mimic human cognitive structure. In fact, we have not touched on that point. If you ask me, better AI should have the capacity to execute emotional cognitive processes as long as they are adaptive to their environment. This would be an improvement over human decision-making, which sometimes implies making “irrational” choices (from an economic perspective).

      Best regards,
      José María

  4. Very interesting! Apart from the issues identified in the previous comments, I was also thinking about arbitration cases that must be decided in equity, without the need to follow a strict set of rules, laws or regulations to make the decision. It would be interesting to know how AI would work in those cases.

    But even in arbitration cases that are to be decided in law (“arbitraje de derecho”), would the principle “dura lex sed lex” be the guide for the machines?

    Those were the issues that came to my mind and I bet many more can arise. It will certainly be a challenge for the developers of these technologies.

    Best,
    Raul

  5. Interesting. However, I think machines cannot substitute or replace human arbitrators. I believe technology helps lawyers and arbitration in general. Still, deciding either in equity or according to law requires arbitrators to understand the evidence given through witnesses, and for that I think it is difficult (I would say impossible) for a machine to make a correct interpretation of emotions that are obvious to those who are in the presence of someone who, for any reason, we can tell is not telling the truth. Of course, if arbitrators must decide according to law, the decision is not easy either, because the interpretation of the law and its application to a case may differ. I believe that machines can “work” as positivists. But do we want that?
    Nevertheless, I think it is a very interesting issue. Congrats to the authors.
    All the best, Fatima
