July 1, 2021

Artificial intelligence and medical devices: The potential impact of the proposed Artificial Intelligence Regulation on medical device software

This article was also published on the EACCNY website on September 16, 2021.

The recent proposal for a regulation on artificial intelligence (the Artificial Intelligence Regulation) might add a new piece to the regulatory puzzle governing medical device software and other medical devices that incorporate software based on AI algorithms.

The proposal for a regulation laying down harmonized rules on artificial intelligence, adopted by the European Commission on April 21, 2021 (COM(2021) 206 final), intersects with other regulatory frameworks, such as Regulations (EU) 2017/745 and 2017/746 (the former became fully applicable only last month, on May 26, 2021, while the latter is expected to become applicable on May 26, 2022).

What is the relationship between the two regulations?

The recent proposal aims to provide a horizontal system of rules to regulate artificial intelligence transversally, i.e., through interaction with the various sector-specific regulatory frameworks already governing products that consist of or incorporate AI systems.

The proposal adopts a risk-based approach in that it is not aimed primarily at regulating AI as a technology, and instead focuses on the main risk scenarios. Most of the requirements prescribed by the proposal concern only high-risk AI systems, i.e., technologies that generate high levels of risk to health, safety, or fundamental rights.

There is no doubt that AI systems incorporated into medical devices (and AI systems that are themselves medical devices) can be classified as high-risk systems under Art. 6, par. 1 of the proposed Regulation. Pursuant to this provision, an AI system would be considered high-risk when both of the following conditions are met: (a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the European Union harmonization legislation listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the European Union harmonization legislation listed in Annex II.

In relation to medical devices, the third-party conformity assessment is the one performed by notified bodies for the purpose of granting the CE marking, which is necessary to place medical devices on the EU market. Therefore, Art. 6, par. 1 seemingly excludes AI systems incorporated into medical devices (or AI systems that are themselves medical devices) that fall into class I under Regulation (EU) 2017/745 (and hence do not require the intervention of a notified body to obtain the CE marking) from the category of high-risk systems and thus from the scope of the proposed Regulation. Under Rule 11 of Regulation (EU) 2017/745, however, most software previously classified as class I now falls into a higher risk class and is therefore subject to the intervention of notified bodies. As a result, most AI-based medical devices will likely fall under the definition of high-risk systems set out in the proposal.

What is going to change in the assessment of AI-based medical devices?

The explanatory memorandum that accompanies the proposal specifies that, with regard to high-risk AI systems intended to be used as safety components of products, the Regulation will be supplemented by sector-specific safety legislation in order to guarantee consistency, prevent duplication, and reduce additional burdens on businesses.

As far as medical devices are concerned, compliance with the requirements that the proposal sets for AI systems is to be verified through the conformity assessment procedures set forth in the sector’s regulatory framework. Such procedures should cover both the AI system safety risk assessment prescribed by the proposal and the general risk assessment required by medical device regulations, which is designed to guarantee the end product’s overall safety.

In other words, AI-based medical devices will still be subject to the same pre- and postmarketing assessment mechanisms. The main difference is that such mechanisms will have to guarantee compliance not only with the requirements set by the regulatory framework on medical devices, but also with those prescribed by the Artificial Intelligence Regulation. Thus, for example, for postmarketing surveillance the device manufacturer will have to proactively collect and evaluate data on the experience gained from AI systems on the market, with the aim of identifying any corrective or preventive measures that need to be implemented immediately.

What role will notified bodies play?

This raises a question or two regarding the qualification of the notified bodies assigned to assess the conformity of medical devices. If the objective is to integrate the AI safety assessment into a comprehensive premarketing assessment of medical devices conducted by notified bodies, such bodies will have to be accredited under both regulatory frameworks (AI and medical devices) in order to carry out the dual assessment needed. Otherwise, manufacturers will have to turn to two different bodies to certify their products, which would likely lead to increased costs and longer timeframes.

How else will the Artificial Intelligence Regulation impact businesses operating in the sector?

It is immediately clear from the proposal that it was crafted with the intention of avoiding any duplication of burdens and procedures. It is not clear, however, how that objective will be achieved in practice, given that compliance with both regulatory frameworks, which overlap to some extent, is required.

For example, according to the proposal, manufacturers of AI-based medical devices classified as high-risk systems will be subject to a series of requirements similar to those prescribed by the regulations on medical devices, such as establishing a risk management system (Article 9) and appropriate data governance and management practices (Article 10), drawing up technical documentation, ensuring an appropriate type and degree of transparency (Article 13) and IT security, and so on.

It is, therefore, crucial to understand how such documentation will have to be prepared to guarantee compliance with both regulatory frameworks and prevent unnecessary red tape that would not bring any added value to the pursuit of product safety.

Institutional cooperation for the consistency and coordination of regulatory frameworks

The above are our initial comments on the proposal, and our reasoning here is based on the proposal as it is formulated today. We are aware that it may well be amended, even significantly so, before the regulation is passed. In any case, at a time of great regulatory turmoil in the medical device sector, when application of Regulation (EU) 2017/745 has not yet been consolidated, further regulatory developments of this magnitude may increase uncertainty and destabilize businesses in the sector, which are always in search of a clear regulatory framework for the marketing of their products.

Close cooperation between technical/scientific bodies at the EU and Member State levels is thus particularly desirable in order to encourage the adoption of appropriate amendments, both to the proposal and to the future implementing regulations, i.e., the measures designed to provide detailed technical instructions for the implementation of the legislative acts (both the AI regulation and the regulations on medical devices). The objective is to ensure that the gears keep turning and that the two regulatory frameworks are consistent and well coordinated, so as to support the development of an increasingly vibrant and promising sector.
