A European approach to the regulation of Artificial Intelligence: The EC publishes its proposal for new rules on AI
On April 21, the European Commission (EC) presented a proposal for a regulation to develop a common framework for Artificial Intelligence, proposing “new rules and actions for excellence and trust in Artificial Intelligence” (the “AI Regulation”)[1].

The proposal builds on previous initiatives, such as the Ethics Guidelines for Trustworthy AI presented in April 2019, the White Paper on Artificial Intelligence released in February 2020, and the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence. The proposed AI Regulation will add another element to the EU’s Digital Single Market Strategy, as part of the aim of turning the EU “into the global hub for trustworthy Artificial Intelligence.”

The EC advocates that a common set of rules for AI systems available on the EU market, or otherwise affecting people in the EU, be designed to be human-centric, so that people can trust that the technology is used in a way that is safe and compliant with the law, including respect for fundamental rights.

The AI Regulation will define “AI systems” and impose tailored obligations on actors along different parts of the value chain, from providers of AI systems to manufacturers, importers, distributors, and users. The draft takes a risk-based regulatory approach, proposing a list of “unacceptable risk” systems, i.e., particularly harmful AI practices that will be prohibited altogether as contravening EU values (such as toys using voice assistance to encourage dangerous behavior among minors). The proposed regulation also identifies “high-risk” systems, i.e., systems that pose significant risks to the health and safety or the fundamental rights of persons. One example of a “high-risk” AI system is the use of facial recognition in public places, but the list also includes systems deployed in the administration of justice and law enforcement.

Overall, with this proposal, the EC is aiming to define a regulatory framework that will not inhibit the development of technology, but will instead be limited to the minimum requirements necessary to address the risks and problems linked to AI without disproportionately increasing the cost of placing AI solutions on the market. This means that for non-high-risk AI systems, only very limited transparency obligations are foreseen; for example, informing people that they are interacting with an AI system.

In terms of governance, the proposed rules will be enforced through governance systems at the Member State level, built on already existing structures. Moreover, the EC supports the establishment of a cooperation mechanism at the EU level through the creation of a European Artificial Intelligence Board, which is intended to facilitate the implementation of the regulation within Member States, as well as to drive the development of standards for AI.

The entry into force of the regulation is scheduled for the second half of 2022. Once adopted, the AI Regulation will be directly applicable across the EU.

[1] This is an introduction to be followed up by more in-depth articles in later editions of our newsletter.
