Besides providing a common definition of AI,[1] the proposed regulation establishes general rules based on the level of risk posed by each specific AI application.
And the EU is going beyond the AI Act. The European legislator is simultaneously working on two directives (the AI Liability Directive and the Revised Product Liability Directive, as defined below) that seek to ensure more effective protection against harm caused by high-risk AI systems by easing claimants’ burden of proof and requiring AI providers to disclose evidence.
Additional regulations governing broader matters (e.g., copyright, data protection, general product liability, medical devices) will also affect AI development, as is to be expected given the cross-cutting nature of AI.
This article aims to clarify this dense legislative framework, even though most of the legislation is still in draft form.
In the spotlight: The AI Act
First proposed by the European Commission in April 2021, the AI Act establishes obligations for providers of AI systems and AI-enabled products using a risk-based approach:
- Those presenting an unacceptable risk to people’s safety and fundamental rights would, with narrow exceptions, be banned;
- A wide range of high-risk AI systems used in critical areas (such as education, employment, law enforcement, and justice) would be authorized but subject to a set of requirements and obligations (e.g., conformity assessment);
- Solutions that pose only a limited risk would be subject to lighter obligations, chiefly transparency requirements.
In addition, following the recent uproar over generative AI systems, the European Parliament supplemented the draft with a tiered approach for AI models that have no specific intended purpose (known as general purpose AI), including a stricter regime for foundation models, i.e., large models trained on broad data on which other AI systems can be built. Foundation models would have to guarantee robust protection of fundamental rights. Generative AI systems like GPT would also have to comply with additional transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing information on the use of training data protected under copyright law.
The EU Parliament’s plenary session adopted its negotiating position on June 14, 2023, and the AI Act has now entered its final phase: the trilogue negotiations among the Parliament, the Council, and the Commission. The goal is to reach an agreement by the end of 2023.
Best supporting actor: The AI Liability Directive
The draft AI Act contains no provisions on liability or compensation for damage suffered as a result of AI systems being placed on the market. To fill this gap, the Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, submitted by the European Commission on September 28, 2022 (the “AI Liability Directive”), establishes a fault-based liability regime and lays down uniform requirements for consumer claims for damages caused by AI systems.
Importantly, it introduces a “rebuttable presumption of causality” that would ease the claimant’s burden of proof where a duty of care has been infringed and it is “reasonably likely,” based on the circumstances of the case, that this infringement influenced the output produced by the AI system that gave rise to the damage.
The current draft has been assigned to the European Parliament’s Legal Affairs Committee (JURI); the next step is for the Parliament and the Council to examine and adopt the text.
Time for a revival: The Revised Product Liability Directive
In addition to the two AI-targeted law proposals described above, there is another EU legislative draft that could shape the AI regime for decades to come (and offer a foothold for high-profile lawsuits). This is the proposal for a new Directive on Liability for Defective Products (the “Revised Product Liability Directive” or “RPLD”), which was issued alongside the AI Liability Directive.
The RPLD seeks to repeal and replace Directive 85/374/EEC (the “Product Liability Directive” or “PLD”) with an updated framework better suited to the digital economy. The PLD provides a strict liability regime under which any consumer who suffers material damage from a defective product can seek compensation from the manufacturer. The RPLD proposal aims to modernize that regime by extending it to software and digital products, including AI products.
While retaining the PLD’s traditional strict liability rules, the RPLD proposal goes a step further toward an even stricter regime for manufacturers and developers of defective AI products. Article 9 of the draft provides that if a product’s technical or scientific complexity makes it excessively difficult for claimants to prove its defectiveness or the causal link between the defect and the damage, then such defectiveness or causation can be presumed.
On June 14, 2023, the Council of the EU adopted its negotiating position, which enables it to engage in talks with the European Parliament—once that body has adopted its own position—and with the European Commission to settle on the final legal text.
Offstage: Pre-existing laws and their impact on the development of the AI landscape
The regulatory landscape described above may already appear cumbersome to any operator planning to invest in the AI sector, yet it comes on top of a set of pre-existing laws that have already had a major impact on the technology sector.
For example, the 2019 Digital Single Market Directive (the “DSM Directive”) introduced AI and machine learning to the copyright world. Articles 3 and 4 provide an innovative copyright exception for text and data mining (the “TDM exception”). This exception allows researchers at academic and cultural institutions to use all lawfully accessible works to train machine learning applications. Everyone else (including commercial machine learning developers) can use lawfully accessible works, unless their use for text and data mining purposes has been “expressly reserved [i.e., opted out] by their rightsholders in an appropriate manner, such as machine-readable means.”
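The Directive does not specify what such “machine-readable means” should look like in practice. One emerging convention, the draft W3C TDM Reservation Protocol (TDMRep), lets a website publish its reservation in a well-known JSON file. The Python sketch below is purely illustrative: the file location and field names are assumptions based on that draft protocol, and nothing in the DSM Directive mandates this particular mechanism.

```python
import json
import urllib.request


def tdm_opted_out(domain: str) -> bool:
    """Check whether a site signals a TDM opt-out via a well-known
    tdmrep.json file (assumed layout, based on the draft TDMRep protocol)."""
    url = f"https://{domain}/.well-known/tdmrep.json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            rules = json.load(resp)
    except (OSError, ValueError):
        # No file, network error, or unparsable JSON: no opt-out signal found.
        return False
    if not isinstance(rules, list):
        return False
    # Each rule is assumed to map a path ("location") to a "tdm-reservation"
    # flag, where 1 means the rightsholder has expressly reserved TDM rights.
    return any(
        isinstance(rule, dict) and rule.get("tdm-reservation") == 1
        for rule in rules
    )


if __name__ == "__main__":
    # A commercial text-and-data miner would run a check like this before
    # including a site's works in a training corpus.
    print(tdm_opted_out("example.com"))
```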
Some have argued that the draft AI Act, combined with the TDM exception, fails to protect human creators adequately. Though a revision of the TDM exception is not yet planned, many stakeholders in the creative industries have raised doubts about the clarity and enforceability of the opt-out processes envisaged by such provisions. As a result, the protection of rightsholders may well be left to streaming platforms (as was recently the case with Spotify).
The regulation of AI will also have to take into account the GDPR, the Data Act Proposal, and any other data protection laws. The European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB), in their Joint Opinion 5/2021 on the first draft of the AI Act proposal, welcomed the “risk-based approach of the AI Act” while underlining its “prominently important data protection implications.” However, since the Joint Opinion dates back to the very first draft of the AI Act, further statements can be expected on the proposal that has now entered the trilogue stage.
The list goes on. Online service providers that implement AI solutions as part of their services will have to coordinate the new AI rules with those recently established in the Digital Services Act. Compliance in the medical devices sector will be affected by the overlap between the AI Act and Regulations (EU) 2017/745 and 2017/746. Products with embedded AI solutions will have to comply with the new Machinery Regulation, which was published in the Official Journal of the EU on June 29, 2023, and will apply starting January 2027.
These are just a few examples of the potential overlaps that may alter existing governance and compliance mechanisms. Careful and consistent legal assessment will therefore be imperative to ensure the legal certainty the AI sector needs to invest in scalable and future-proof business models. To this end, our newsletter will follow up with sector-specific articles designed to untangle the interplay between AI regulation and existing laws.
[1] Recital 6: “A machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”