On the horizon: Italian justice’s initial approach to AI

October 11, 2021
Thanks to Francesca Giordanelli for collaborating on this article

Predictive justice is already a reality in some countries, such as the United States. In Europe, however, the potential risks of using AI to apply mathematical models to the law, including discrimination and excessively stiff penalties, have historically hindered its implementation. Recently, though, awareness of the pros and cons has grown, and things have started to change. AI tools appear to be gradually entering the Italian legal landscape: courts and academic institutions are running trials with AI, for instance by creating databases of decisions and training AI systems to render fair decisions based on previous case law.

Appropriate tools that would allow predictive justice to be used in real-life tribunals and courts to resolve disputes have yet to be developed. In parallel with that technological development, appropriate legal safeguards must be put in place to mitigate the consequences of predictive justice. The proposed European Regulation on AI addresses predictive justice and classifies it as high-risk AI (see below). As a result, when predictive justice AI is eventually implemented, it will have to comply with the strict obligations provided in that Regulation.

How predictive justice works

The term predictive justice refers to a fairly new way of employing AI to predict judicial decisions through algorithms. The advantage of these “robot judges” is that they can reach fair, more predictable decisions more quickly than human judges, avoid human error and inconsistency in judgment, and guarantee legal certainty. Of course, for predictive justice to be implemented, appropriate safeguards would be needed to avoid violations of human rights. To accept the concept of predictive justice, we must accept that the way judicial decisions are made can be predicted and is therefore certain and scientific.

However, predictive justice is not devoid of risks: Appendix I to the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment lists three. First, when mathematical models are applied to judgment content, the complexity of the legal arguments involved, which rest in turn on subjective interpretation, cannot be fully captured. Second, there is a risk of producing incorrect explanations of legal decisions based on statistical correlations that account for only some of the elements that fed into the final decision. Third, the learning models used to train the AI could actually exacerbate discrimination, since mathematical models fed biased data can produce discriminatory results. In an attempt to guide the creation of AI predictive justice tools on an ethical plane, the Council of Europe developed five principles for the Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment: respect for fundamental rights; non-discrimination; quality and security; transparency, impartiality, and fairness; and the “under user control” principle.[1]

Around the world, attempts to create functional predictive justice tools are based on a statistical-jurisprudential approach, meaning that an AI machine is trained to predict future decisions on the basis of previous relevant case law input into the system.

Indeed, the first step in creating predictive justice AI generally consists of constructing a suitable dataset composed of previous judgments and their outcomes.

Once this dataset is created, the AI, through algorithms based on computational statistics and predictive analysis, learns to recognize data patterns and becomes increasingly capable of predicting future decisions. After a process of trial and error, the AI is finally ready to be used by both lawyers and judges.
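To make this pipeline concrete, the sketch below in Python follows the statistical-jurisprudential approach just described, assuming a hypothetical, tiny corpus of prior decisions labeled with their outcomes. The data, labels, and choice of scikit-learn model are purely illustrative and are not drawn from any real court system.

```python
# Illustrative sketch only: learn outcome patterns from prior judgments,
# then predict the likely outcome of a new dispute. All data are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: a dataset built from previous judgments and their outcomes.
judgments = [
    "claim upheld: the employer terminated without the required notice",
    "claim upheld: breach of contract was proven at trial",
    "claim dismissed: the plaintiff produced insufficient evidence",
    "claim dismissed: no damages were demonstrated",
]
outcomes = ["upheld", "upheld", "dismissed", "dismissed"]

# Step 2: the model learns which textual patterns correlate with each
# outcome (computational statistics standing in for "training" the AI).
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(judgments, outcomes)

# Step 3: predict the likely outcome of a new, unseen dispute.
print(model.predict(["employee terminated without notice seeks compensation"]))
```

In practice, the dataset would contain thousands of anonymized decisions, and the system would go through the trial-and-error refinement described above before any use by lawyers or judges.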

The proposed AI Regulation and predictive justice

On April 21, 2021, the European Commission published its Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act).[2] According to Recital 40, “Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts.”

The classification of predictive justice systems as high risk means that they will be subject to rigorous obligations before they can be put on the market. The following requirements must be met:[3]

  • adequate risk assessment has to be carried out and a mitigation system must be in place;
  • high quality of the datasets feeding the system has to be ensured to minimize risks and discriminatory outcomes;
  • activity must be logged to ensure traceability of results;
  • detailed documentation providing all necessary information about the system and its purpose needs to be prepared for authorities to assess compliance;
  • clear and adequate information needs to be given to the user;
  • appropriate human oversight measures to minimize risk must be implemented;
  • a high level of security and accuracy needs to be ensured.

Italian experiences

In Italy, we have already witnessed computerization of the administrative side of court proceedings: court services, archives, brief filings, and service between the parties to a lawsuit. But, partially due to the risks outlined above, technology has not yet been applied to the judgment phase.

However, there has recently been a spate of interesting predictive justice initiatives that also cover automation of the judgment phase.

The Scuola Superiore Sant’Anna Predictive Justice project, run in collaboration with the Genoa Tribunal and the Pisa Tribunal, is one interesting example. The goal is not only to predict judicial decisions precisely but also to use explainable AI (XAI). Unlike “black box” AI, XAI is designed to “explain the reasoning behind each decision for different stakeholders, identify possible trends/biases and simplify legal tasks.”[4]
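As a generic illustration of what explainability can mean in this setting (and not a description of the project’s actual platform), the sketch below pairs each prediction with the words that weighed most heavily in it. The corpus, labels, and linear-model technique are assumptions made purely for the example.

```python
# Hypothetical XAI sketch: predict an outcome and surface the words that
# contributed most to the prediction. All data and choices are illustrative.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "claim upheld employer gave no required notice",
    "claim upheld contract breach was proven",
    "claim dismissed insufficient evidence produced",
    "claim dismissed no damages were shown",
]
labels = ["upheld", "upheld", "dismissed", "dismissed"]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

def predict_and_explain(text, top_n=3):
    """Return the predicted label and the words that pushed it hardest."""
    vec = vectorizer.transform([text]).toarray()[0]
    # Per-word contribution: positive values pull toward clf.classes_[1].
    contributions = vec * clf.coef_[0]
    vocab = vectorizer.get_feature_names_out()
    top = np.argsort(np.abs(contributions))[::-1][:top_n]
    reasons = [(vocab[i], round(float(contributions[i]), 3))
               for i in top if vec[i] > 0]
    return clf.predict(vectorizer.transform([text]))[0], reasons

print(predict_and_explain("employer gave no notice before dismissal"))
```

Surfacing per-feature contributions this way is among the simplest XAI techniques; a system aimed at judges, lawyers, and parties would need far richer, legally meaningful explanations.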

This is not the only such experiment. The University of Bologna has opened the AI4Justice laboratory at the Alma Mater Research Institute for Human-Centered Artificial Intelligence. The laboratory studies data analysis (data analytics), prediction (predictive AI), and visualization (legal design) techniques, using methodologies that combine law, technology, and ethics.

The Court of Appeals of Brescia is collaborating with the University of Brescia to estimate the duration of a dispute in a given matter and to map the different jurisprudential approaches taken by various judicial authorities, starting with the Tribunal and the Court of Appeals of Brescia. The project is limited to two fields of law: labor law and business law.
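Purely as an illustration of how duration forecasting can be framed (the features, figures, and model below are hypothetical and are not taken from the Brescia project), the expected length of a proceeding can be treated as a regression target over simple case attributes:

```python
# Hypothetical sketch: regress the duration of past proceedings on a few
# case features. Features, figures, and model choice are all invented.
from sklearn.ensemble import RandomForestRegressor

# Per past case: [field (0 = labor law, 1 = business law),
#                 number of parties, number of hearings].
X = [[0, 2, 3], [0, 2, 5], [1, 3, 4], [1, 4, 8], [0, 2, 2], [1, 3, 6]]
durations_days = [320, 510, 400, 780, 290, 600]  # observed durations

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, durations_days)

# Estimate the duration of a new labor-law case with 2 parties, 4 hearings.
print(round(model.predict([[0, 2, 4]])[0]), "days (estimate)")
```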

Concluding Remarks

The use of AI in the decision-making phase of a trial poses a series of ethical and legal questions that will have to be taken into account as predictive justice technology develops. Using algorithms and mathematical models to predict judicial decisions would reclassify law as a science rather than an art: it would bring more legal certainty but less flexibility in interpretation.

In Europe and in Italy, research initiatives and attempts to create fair and explainable AI for predictive justice purposes are growing in number. While the benefits of this technology would be enormous in terms of speeding up proceedings and making application of the law more consistent, there are still risks involved. The new predictive justice rules to be developed at the European level will provide useful guidance for those working on this issue. Moreover, the five predictive justice principles developed by the Council of Europe (respect for fundamental rights; non-discrimination; quality and security; transparency, impartiality, and fairness; “under user control”) are designed to foster predictive justice AI that is in line with fundamental human rights. Only AI that respects fundamental human rights can conceivably be implemented.

[1] European Commission for the Efficiency of Justice (CEPEJ), European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, available at <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c>.

[2] Please refer to our previous article: https://portolano.it/newsletter/portolano-cavallo-inform-litigation-arbitration/the-proposed-eu-regulation-on-ai-a-proportional-risk-based-approach-to-a-civil-liability-regime-for-artificial-intelligence.

[3] Articles 8–29 of the Proposed Regulation.

[4] https://www.predictivejurisprudence.eu/the_project/the-platform/
