AI Act enters the final phase of the adoption process as the EU Parliament agrees on its negotiating position

On June 14, 2023, by a large majority (499 votes in favor, 28 against, and 93 abstentions), the European Parliament adopted its negotiating position on the “Proposal for a Regulation laying down harmonized rules on artificial intelligence” (the “AI Act”).

First proposed by the European Commission in April 2021, the revised draft adopted in last week’s plenary session reflects the most recent technological developments in the field of AI, including the widely discussed generative AI models.

The European Parliament amended the definition of “AI systems” to align it with the definition agreed upon by the Organization for Economic Co-operation and Development (OECD). According to the newly drafted definition, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”

The proposed rules establish obligations for providers of AI systems and AI-enabled products following a risk-based approach: they identify both technologies that pose an unacceptable risk and categories likely to carry higher or lower levels of risk, which should therefore be regulated differently. Below is a brief recap of the main points of the AI Act, along with the main amendments introduced by the European Parliament.

Categories of AI practices based on their risks

Prohibited AI practices: The AI Act identifies a set of AI systems presenting unacceptable risk to people’s safety and fundamental rights that would be banned except in a limited number of cases. Prohibited AI practices include those enabling harmful manipulative subliminal techniques or real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (e.g., facial recognition).

In the draft approved by the plenary session, the European Parliament amended the previous list of prohibited AI practices to ban intrusive and discriminatory use of AI systems, such as:

- real-time remote biometric identification systems in publicly accessible spaces;
- post remote biometric identification systems, with an exception solely for law enforcement prosecuting serious crimes and only after judicial authorization;
- biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation);
- predictive policing systems (based on profiling, location, or past criminal behavior);
- emotion recognition systems in law enforcement, border management, workplaces, and educational institutions; and
- indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).

High risk AI: A wide range of high risk AI systems used in critical areas or use cases (such as education, employment, law enforcement, and justice) would be authorized, subject to a set of requirements and obligations (e.g., conformity assessment). The European Parliament expanded the classification of high risk use cases to include systems that could potentially harm people’s health, safety, fundamental rights, or the environment. The revised draft also adds AI systems designed to influence voters in political campaigns and recommendation systems used by VLOPs (Very Large Online Platforms, as identified in the Digital Services Act) to the high risk list.

Limited risk AI: In the draft, this category includes AI systems that generate or manipulate image, audio, and video content, such as deepfake systems. Limited risk AI should comply with transparency requirements that allow users to make informed decisions.

Transparency obligations for general purpose AI

Following the recent uproar over generative AI systems, the European Parliament introduced a tiered approach for AI models that do not have a specific purpose (known as general purpose AI), with a stricter regime for foundation models, i.e., large models on which other AI systems can be built. Foundation models would have to guarantee robust protection of fundamental rights; health, safety, and the environment; democracy; and the rule of law. They would need to assess and mitigate risks; comply with design, information, and environmental requirements; and be registered in the EU database.

Generative foundation models like GPT would have to comply with additional transparency requirements, such as disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, and publishing information on the use of training data protected under copyright law.

Supporting innovation and protecting citizens’ rights

To support AI innovation, the EU Parliament agreed that research activities and the development of free and open-source AI components would be largely exempt from compliance with AI Act rules.

The revised draft also strengthens the role of national authorities, as the AI Act would give them the power to request access to both trained and training models of AI systems, including foundation models. The draft AI Act also proposes to establish an AI Office, a new EU body to support the harmonized application of the AI Act. The Office would provide guidance and coordinate joint cross-border investigations.

In addition, the proposed AI Act aims to strengthen citizens’ rights to file complaints about AI systems and to receive explanations of decisions based on high risk AI systems that significantly impact their rights.

Next steps in the legislative process

Now that the EU Parliament’s plenary session has adopted its negotiating position, talks with the EU Council of Ministers (representing European governments) and the European Commission on the final form of the law will begin. The goal is to reach an agreement by the end of the year.

In the meantime, EU Parliament committees will continue to work on the AI Liability Directive and the Revised Product Liability Directive, which will also have a significant impact on AI development in the coming years.
