The proposed EU regulation on AI: A proportional risk-based approach to a civil liability regime for artificial intelligence
On April 21, 2021, the European Commission published a proposal for a regulation (the “Proposal”) that lays the groundwork for addressing the risks associated with the use of artificial intelligence (“AI”).[1]

The Proposal applies to AI systems, broadly defined as systems that are “developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing environments they interact with.”

The Proposal demonstrates that European institutions are sharply focused on innovative technologies, analyzing the challenges arising from AI applications and proposing potential solutions. It sets out a detailed and balanced risk-based approach to the regulation of new technologies, with particular attention paid to the liability regime of the subjects involved across the whole value chain.

Soft law versus hard law

Until now, the European Union, while acknowledging that legislation is one of the tools for promoting sustainable development of artificial intelligence, has always resorted to soft-law means to do so. Compared to directives and other hard-law instruments, such soft-law instruments allow greater flexibility, which has proved compatible with the fast-paced development of such technology.

In February 2017, the European Parliament released a “Report with Recommendations to the Commission on Civil Law Rules on Robotics,”[2] urging the European Commission to formulate a directive determining the general outlines of a civil law regime for the use of AI and robotics. In June 2018, the Commission appointed a group of experts (the High-Level Expert Group on Artificial Intelligence)[3] that in 2019 published its “Ethics Guidelines for Trustworthy AI.”[4] These guidelines cover the development of reliable, safe, accountable, and transparent AI systems.

These initiatives were further expanded upon by the European Commission, which incorporated them into a white paper dated February 19, 2020, titled “On Artificial Intelligence – A European Approach to Excellence and Trust.”[5]

More recently, in October 2020, the European Parliament released three resolutions aimed at defining the cornerstones of future AI regulation, particularly in the areas of ethics,[6] civil liability,[7] and the relationship between technology and intellectual property rights.[8] Indeed, the guidance offered by the European Parliament in relation to the civil liability regime also served as a starting point for drafting the provisions contained in the Proposal and designed to establish the obligations of providers and other parties involved in the cycle of production, distribution, and use of AI systems.

A European approach to risk-based definitions and regulation

With the Proposal, comprising 85 articles and nine annexes, the Commission is following up on those soft-law instruments, marking a key milestone in the ambitious European Digital Strategy.

The Proposal takes an important step in filling in the blanks in the white paper, which anticipated the need to subject high-risk AI applications to more stringent regulation but was unclear in its definition of “high-risk.” The white paper suggested that high-risk categorization should be based on two cumulative criteria: whether the AI is deployed in an area where significant risks are expected, and whether the AI application is used in such a way that significant risks are likely to arise. Now, the Proposal clearly defines not only high-risk technologies, but also categories likely to have higher or lower risk levels that should therefore be regulated differently. These levels of risk are defined as follows:

  • Unacceptable risk: AI systems that are considered a clear threat to people’s safety, livelihoods, and rights, including AI systems or applications that manipulate human behavior to circumvent users’ free will (e.g., toys that use voice assistance to encourage dangerous behavior by children) and systems that enable “social scoring” by governments.
  • High-risk: AI technology used, for example, in critical infrastructure such as transportation, education or vocational training, employment, and labor management; law enforcement and administration of justice; migration, asylum, and border control management. This category also includes safety components (e.g., application of AI in robot-assisted surgery).
  • Limited risk: AI systems with specific transparency requirements (e.g., when using an AI system such as a chatbot, users should be aware that they are interacting with a machine so that they can make informed decisions to continue or step back).
  • Minimal risk: applications such as AI-enabled video games or spam filters.

The principle of proportionality is applied via balance and control measures graduated in sync with the various risk levels. While AI systems posing unacceptable risks will simply be banned, high-risk AI systems will be subject to strict obligations before they can be placed on the market. These measures will include proper risk assessment and ongoing monitoring of dataset quality, as well as adequate human oversight. Finally, limited-risk systems will be subject to specific transparency obligations, requiring service providers, for example, to make users aware that they are interacting with a machine so that they can make an informed decision to continue or step back.

On the other hand, minimal-risk systems will be allowed to be used freely but may still be subject to voluntary codes of conduct for non-high-risk AI, as well as to regulatory sandboxes to facilitate responsible innovation.
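Schematically, the proportionality logic described above amounts to a lookup from risk tier to regulatory treatment. The sketch below is purely illustrative: the tier names and obligation summaries paraphrase this article's reading of the Proposal and are not categories defined anywhere in code.

```python
# Illustrative mapping of the Proposal's four risk tiers to their
# regulatory treatment, as summarized in this article. Not legal advice.
OBLIGATIONS_BY_TIER = {
    "unacceptable": "prohibited outright",
    "high": "strict ex ante obligations: risk assessment, dataset-quality "
            "monitoring, human oversight, conformity assessment",
    "limited": "transparency duties (e.g., disclose that the user is "
               "interacting with a machine)",
    "minimal": "free use; optional codes of conduct and regulatory sandboxes",
}

def treatment(tier: str) -> str:
    """Return the regulatory treatment summary for a given risk tier."""
    return OBLIGATIONS_BY_TIER[tier.lower()]

print(treatment("unacceptable"))  # prohibited outright
```

The point of the graduated structure is that compliance burden scales with risk: the same provider may face no obligations for one product and a full conformity-assessment regime for another.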

How the Proposal addresses the issue of civil liability for AI systems

The Proposal broadly addresses the problems posed by the development and use of AI, including concerns over liability deriving from the use of AI systems. The Commission seeks to ensure that it is consistent with and complementary to present and future initiatives of the Commission that aim to address those same problems, including the revision of sectoral product legislation (e.g., the Machinery Directive, the General Product Safety Directive) and initiatives that address liability issues related to new technologies, including AI systems. Notably, on the same day that the European Commission published the Proposal, it also published a draft of the “Regulation of the European Parliament and of the Council on machinery,” designed to replace Directive 2006/42/EC of 17 May 2006 on machinery, which guarantees the free movement of machinery within the EU market and provides a high level of protection for users and other vulnerable subjects.[9]

By imposing specific obligations upon providers, distributors, importers, users, and even third parties (Articles 16 to 29), the Proposal follows and validates the approach established in the October 2020 Resolution of the European Parliament on the civil liability regime for artificial intelligence. The starting point there was the assumption that “AI-systems have neither legal personality nor human conscience.” On that occasion, the Parliament also noted that “the opacity, connectivity, and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of AI systems to specific human input or to decisions in the design” and that “in accordance with widely accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the different persons in the whole value chain who create, maintain, or control the risk associated with the AI-system liable.”

In fact, Recital 53 of the Proposal clarifies, “It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.”

Providers of AI systems must establish “appropriate data governance and management practices” and must use datasets that are “relevant, representative, free of errors, and complete.” Compliance with the rules must also be demonstrated through conformity assessments and technical documentation containing a general description of the AI system; its main elements, including validation and testing data; and information about its operation, including metrics of accuracy.

After a high-risk AI system is sold or put into use, the providers must establish a “proportionate” post-market monitoring system to collect data on the system’s operation, to ensure its “continuous compliance” with the regulation, and to take corrective action if needed. Systems that continue to learn after they have been put into use need new compliance assessments if the modifications from learning are substantial. If the user of such a system modifies it substantially, then the user, not the provider, is responsible for conducting the new compliance assessment.

Moreover, high-risk AI systems must be designed to allow users to “oversee” them in order to prevent or minimize “potential risks.” Design features must enable human users to avoid overreliance on system outputs (“automation bias”) and must allow a designated human monitor to override system outputs and to use a stop button.

Organizations can be fined up to 6% of annual worldwide turnover for the most serious breaches, such as engaging in prohibited AI practices. Other breaches are subject to fines of up to 4% of annual worldwide turnover.
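In arithmetic terms, the turnover-based ceilings work as follows. This minimal sketch simply applies the 6% and 4% percentages quoted above to a hypothetical company's turnover; the actual fine in any given case would be set by the competent authority within these ceilings, and the Proposal also provides for fixed monetary maximums not modeled here.

```python
def max_fine(annual_worldwide_turnover: float, serious_breach: bool) -> float:
    """Upper bound of the turnover-based fine under the Proposal:
    6% of annual worldwide turnover for the most serious breaches,
    4% for other breaches (illustrative simplification)."""
    rate = 0.06 if serious_breach else 0.04
    return annual_worldwide_turnover * rate

# Hypothetical company with EUR 500 million annual worldwide turnover:
print(max_fine(500_000_000, serious_breach=True))   # 30000000.0 (EUR 30m cap)
print(max_fine(500_000_000, serious_breach=False))  # 20000000.0 (EUR 20m cap)
```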

Importantly, in Recital No. 12 and in Article 2(5), the Proposal clarifies that the rules set forth in the regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council (as amended by the Digital Services Act).

The regulation contemplated in the Proposal is an innovative piece of legislation that will deploy its effects not only within the European Union but also in the many other countries that are likely to follow its approach, partly in light of the extraterritorial scope of the proposed provisions and their intersection with other pieces of legislation (such as the GDPR) and legislative initiatives (such as the DGA, DMA, and DSA).

With its broad scope and detailed rules on prohibited practices and obligations in relation to high-risk systems, the proposed regulation has potentially far-reaching impact across a wide range of sectors. The regulation will now be reviewed by the Council of the EU and the European Parliament, both of which may propose amendments. The adopted text of the regulation will be directly applicable across all EU Member States two years after the regulation’s entry into force.

[1] European Commission, Proposal for a Regulation on a European approach for Artificial Intelligence, 2021/0106.

[2] European Parliament, Report with Recommendations to the Commission on Civil Law Rules on Robotics (February 2017).

[3] The High-Level Expert Group on Artificial Intelligence (AI HLEG) is a group of 52 experts from academia and public and private sectors appointed by the EU Commission to support implementation of the European Strategy on Artificial Intelligence.

[4] High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (2019).

[5] European Commission, White Paper “On Artificial Intelligence – A European Approach to Excellence and Trust,” February 19, 2020.

[6] European Parliament, Resolution on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).

[7] European Parliament, Resolution on a civil liability regime for artificial intelligence, 2020/2014(INL).

[8] European Parliament, Resolution on intellectual property rights for the development of artificial intelligence technologies, 2020/2015(INI).

[9] European Commission, Proposal for a Regulation of the European Parliament and of the Council on machinery (April 2021).
