The AI Act: What will the impact be on the medical device industry?
On June 14 of this year, the European Parliament approved, with amendments, a proposal for a regulation that provides a uniform legal framework for the use of artificial intelligence in the European Union, known as the “AI Act.” The approval process is now entering its final phase, which will involve the EU Council and the Commission, with the aim of drafting the final text by the end of the year. According to the proposal, the regulation will become applicable 24 months after its publication in the Official Journal of the European Union.
This is the first ambitious global attempt to regulate AI systems. Underlying the proposal is the realization that these technologies are capable of delivering a wide range of economic, social, and environmental benefits in many areas of human activity by enabling improved forecasting, optimization of operations and resource allocation, and customized service delivery. However, their use may also raise new risks for citizens and society, since these systems (due to their opacity, their complexity, and their capacity for autonomous behavior) are potentially capable of negatively affecting numerous fundamental rights recognized within the European Union, from respect for private life and the protection of personal data to the protection of human dignity.
The regulation provides a unified framework for AI systems, from development through marketing and use, based on a proportionate risk-based approach.
There is an absolute ban (Art. 5) on placing certain AI systems on the market or putting them into service or use. These include systems that employ subliminal techniques or exploit human vulnerabilities to distort behavior in such a way as to cause physical or psychological harm, and remote real-time biometric identification systems in publicly accessible spaces used for law enforcement purposes, with limited exceptions.
Specific obligations are provided for AI systems classifiable as high-risk (Art. 6 ff.). These include the implementation of a risk management system, transparency obligations toward users, and conformity assessment procedures for CE marking that may be handled through third-party bodies.
AI Act and medical devices
The draft regulation defines an artificial intelligence system as “software developed using one or more of the techniques and approaches listed in Annex I [including machine learning approaches, logic, and knowledge-based approaches] that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which they interact.”
Under this definition, many software products classified as medical devices, even those already on the market, will be considered items based on artificial intelligence systems for the purposes of the AI Act and therefore will have to comply with its requirements. Some examples are devices that use AI to assist doctors in diagnosing diseases and those that provide personalized treatment options on the basis of data and information collected about individual patients.
Consequently, such devices will be subject to the AI Act and the specific obligations thereunder.
Classification for medical devices using AI systems
The proposed regulation classifies as high-risk an AI system that meets both of the following conditions:
- The system is intended for use as a safety component of a product, or is itself a product, covered by the EU harmonization legislation listed in Annex II.
- The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment before being placed on the market or put into service in accordance with the EU harmonization legislation listed in Annex II.
With regard to the first condition, Annex II to the AI Act expressly includes EU Medical Device Regulation 2017/745 (“MDR”) and EU In Vitro Diagnostic Devices Regulation 2017/746 (“IVDR”) among the harmonizing regulations.
With regard to the second condition, under the MDR all devices in risk class IIa or higher must undergo conformity assessment by a third party (notified body) in order to be placed on the market.
With regard to medical device software, classification rule no. 11 (Annex VIII, Chapter III) of the MDR should be consulted. This provides that software “intended to provide information used to make decisions for diagnostic or therapeutic purposes” is placed in class IIa, except in certain cases when it is designated class III (if the decisions may result in death or irreversible impairment of a person’s health) or class IIb (if the decisions may result in serious impairment of a person’s health or surgery). Software intended to monitor physiological processes is also placed in class IIa, unless it concerns vital physiological parameters whose variation may lead to immediate danger to the patient, in which case it falls into class IIb. Only medical software outside these cases falls into class I. This means that a large number of software products qualifying as medical devices fall into class IIa or higher under the MDR.
Therefore, a medical device that incorporates or is itself an artificial intelligence system (i.e., software classified as a medical device) would most likely fall into the high-risk AI systems category under the AI Act.
High-risk AI systems and conformity assessment
In line with a risk-based approach, high-risk AI systems are admitted to the European market subject to compliance with certain mandatory requirements (risk management system, transparency obligations toward users, and so on) and an ex-ante conformity assessment.
Sectoral legislation already provides conformity assessment procedures conducted by third parties for products such as medical devices. The draft regulation requires that an AI system’s conformity with the AI Act be assessed as part of the conformity assessment conducted according to sectoral legislation, in order to minimize the burden on manufacturers and avoid possible duplication of procedures.
This means that assessments of medical devices of class IIa or higher will be conducted by the notified bodies already in charge of assessing the safety and performance of devices under the MDR. In such cases, the issuance of the certificate of conformity will certify compliance with both regulations, the MDR and the AI Act.
Potential entanglement with data protection regulations
The AI Act is bound to intertwine not only with the rules on medical devices, but also with the legislation on personal data protection, and in particular with the GDPR (EU Regulation 2016/679). This seems unavoidable in view of the immense amount of data, including personal data, that artificial intelligence systems are intended to process. There isn’t room here for a full discussion of the prerequisites and limits of the processing of such data by AI software, but do note that the need to guarantee data accuracy, and above all the quality of input data (one of the cardinal requirements for AI systems), could lead to new processing of personal data and generate further complexities.
Specifically, Article 10 of the AI Act provides that AI systems that use techniques involving training models with data must be developed on the basis of training, validation, and testing datasets that meet the specific quality criteria listed therein, based on, inter alia, methods of data collection, relevance, representativeness, and accuracy of the datasets, and the use of datasets that take into account the characteristics and the specific geographical, behavioral, or functional context within which the high-risk AI system is intended to operate.
Therefore, even the assessment of software compliance with the AI Act could result in the processing of personal data, and this may raise a number of questions as to the lawfulness of this processing under the GDPR, e.g., in relation to the de-identification techniques used, the security measures taken, and so on. Paragraph 5 of Article 10 provides that insofar as it is strictly necessary to enable the detection and correction of bias (and if this cannot be achieved using anonymous or synthetic data), the processing of special categories of personal data under the GDPR (health data, data relating to sexual orientation, and the like) is permitted subject to certain safeguards. But what if, for example, the datasets include data of people from non-EU countries for whom the same rules do not apply? Could data transfer issues and jurisdictional conflicts arise? These are only hypotheses, but the fact remains that it is not clear how far the assessment of the accuracy of input data can or should go, especially when anonymous or synthetic data are not sufficient for the purpose.
Impact on the procedure for placing devices with AI on the market
As mentioned above, the fact that for medical devices the AI Act conformity assessment is incorporated into the MDR conformity assessment carried out by notified bodies should avoid duplication and bureaucracy, or at least that is the intention of the European legislature.
However, there are still critical aspects in play, including the capacity of notified bodies to shoulder this additional workload, and more critically their ability to assess artificial intelligence systems under the AI Act, since this will require highly specialized technical skills that these bodies likely lack at present.
Moreover, the dual evaluation will undoubtedly require extra work for both companies and notified bodies. That risks translating into longer timeframes and higher costs for certification procedures, especially given the current shortage of notified bodies available to handle the recertification of all medical devices required by the transition from Directive 93/42/EEC to the MDR.
So a new challenge will be added on top of an existing one. The hope is that, at all levels, the tools needed to deal with both will quickly be acquired, especially in light of the fact that new legislation on civil liability for damage caused by artificial intelligence systems is already in the pipeline.
 These are biometric identification systems that capture, compare, and identify biometric data in real time or without significant delay.
 The same proposal defines “safety component of a product or system” as a component that performs a safety function for that product or system or whose failure or malfunction endangers the health and safety of persons or property.
 Certain AI systems identified in Annex III to the draft regulation are also classified as high-risk, essentially on the basis of their purpose (biometric identification and categorization of persons, management and operation of critical infrastructures, access to public services and facilities, and so on).