AI and machine learning in the medicine lifecycle: EMA publishes first draft of reflection paper
On July 19, the European Medicines Agency (“EMA”) made available on its website a first draft of its “Reflection paper on the use of artificial intelligence (AI) in the medicinal product lifecycle,” which contains the authority’s latest considerations on the application of artificial intelligence and machine learning methods in each phase of the medicinal product lifecycle, from drug discovery to clinical trials and post-authorization activities.
The document is part of the initiatives of the Heads of Medicines Agencies (HMA) and the EMA’s Big Data Steering Group (BDSG) to enhance the capacity of the European medicines regulatory network—national authorities, the European Commission, and the EMA itself—to harness these technologies, which rest on the analysis of large amounts of data, and in turn to regulate and steer the development of new products. In particular, the document aims to stimulate reflection and debate with stakeholders on the criteria for the regulatory assessment of these new technologies, identifying for each phase of the medicinal product lifecycle the rules and good practices applicable today (starting with GxP, in addition to the rules on the protection of personal data) and promoting more specific regulation of the subject in the future. The public consultation on the draft reflection paper is open until December 31, 2023, and the topic will be discussed further during a joint HMA and EMA workshop scheduled for November 20 and 21 of this year.
AI and drugs
The use of artificial intelligence—systems capable of simulating human capabilities and behavior to achieve specific goals with a greater or lesser degree of autonomy—is increasingly being complemented by the use of machine learning (ML), meaning algorithms that are able to learn, improve their performance, and provide output by analyzing and categorizing huge amounts of data, with no need for explicit instruction during programming.
These technologies are also spreading rapidly in the pharmaceutical sector, and, as the EMA recognizes, their use can already effectively support the acquisition, transformation, analysis, and interpretation of data throughout the lifecycle of medicinal products. However, the very nature of these technologies gives rise to new risks that must be properly managed. These include the use of inadequate datasets or learning parameters, which could distort the models used in clinical product development, and the use of non-transparent systems, which do not allow the reasons underlying an algorithmic decision to be traced and thus make it impossible to understand the parameters that determined it.
Risk-based approach
For these reasons, the EMA generally suggests a risk-based approach to the development, implementation, and monitoring of AI and ML systems used during the lifecycle of medicines, so that developers establish pre-emptively and proactively which risks need to be monitored and/or mitigated. The level of risk may depend on the technology itself, on failures or alterations in AI models caused by distortions in the underlying data, and on the context in which these technologies are used.
In any case, the responsibility for planning and risk management, as well as for ensuring the use of models and datasets that are fit for purpose and comply with all applicable standards, should fall squarely on the applicants or marketing authorization holders for the medicinal products concerned.
A risk-based approach, which it is to be hoped will be supported by the establishment of more specific risk assessment and management guidelines, also underpins the proposal for a regulation on artificial intelligence currently under discussion at the European level. Known as the “AI Act,”[1] this regulation is destined to find wide application in the pharmaceutical and medical fields. It classifies AI systems into four risk levels, ranging from those that present an unacceptable risk and are therefore prohibited (harmful cognitive behavioral manipulation systems, social scoring systems, and the like) to those posing high, limited, or minimal risk. Various quality and transparency requirements are envisaged, possibly to be implemented through third-party certifying bodies.
The use of software and devices
Software and devices implementing AI and ML systems in the context of the clinical evaluation of medicines or their administration, and thus having a specific medical use, could be classified as medical devices or in vitro diagnostic medical devices under European Regulations 2017/745 and 2017/746, respectively.[2]
Without prejudice to the authorization and certification procedures these devices must undergo under the industry regulations, and in addition to any rules and evaluation procedures that may apply as a result of the AI Act, AI and ML technologies will have to be evaluated by the authorities and bodies involved in the clinical development of drugs to verify that they are reliable and capable of generating output robust enough to support future market introduction.
AI and product lifecycle
According to the reflection paper, the level of risk associated with the use of AI and ML systems may differ depending on the stage of a drug’s lifecycle.
For example, the risks arising from the use of artificial intelligence in drug discovery activities, which aim to identify biological elements or mechanisms (targets) capable of interacting with the pathology targeted by the drug, are considered low.
In the non-clinical drug development phase, the use of AI and ML can not only generate more robust and reliable data but also limit or even avoid the use of laboratory animals. Given the relevance of pre-clinical data in the assessment of a drug’s risk/benefit ratio, the EMA recommends applying GLP (Good Laboratory Practice) to these technologies to the extent possible.
As for clinical studies, AI/ML models must follow the indications derived from GCP (Good Clinical Practice) and be subject to specific evaluation within the study. The sponsor must inform the authorities and bodies involved of the models’ architecture and function, and trial participants must also be informed. The relevant information should be included in the study protocol.
AI and ML also have considerable potential for use in the subsequent production phases of medicinal products (here, the EMA refers first and foremost to the applicability of GMP—Good Manufacturing Practices) and in all activities following marketing authorization. Pharmacovigilance in particular stands to benefit: the flexibility of AI and ML models and their continuous learning capabilities could enable increasingly sophisticated models for detecting, classifying, and managing adverse events. In any case, validation and control of the models used, as well as adequate documentation of their performance, remain the responsibility of the marketing authorization holder.
Given that specific rules and guidelines for the implementation of AI and ML systems in the individual phases of drug development are still missing or incomplete, the EMA recommends that all stakeholders cooperate with the appropriate authorities and seek their advice and clarification, particularly where the most serious risks are concerned.
Further considerations
The EMA also recommends that in the planning, development, and implementation phases of AI/ML systems:
- Data be acquired through transparent and traceable methods that avoid bias, and data integrity and adequate data governance systems be ensured.
- Models be developed that are as transparent as possible—and thus able to explain the output they provide—as well as generalizable.
- The ethical principles outlined in the Ethics guidelines for trustworthy AI[3] submitted by the High-Level Expert Group on AI (AI HLEG) to the European Commission be respected.
Unsurprisingly, these are essentially the same principles underlying the AI Act.
Next steps
The use of artificial intelligence and the technologies that orbit around it is destined to play an ever-increasing role in the entire lifecycle of medicines, with potential benefits both in terms of the efficacy and safety of treatments and in terms of the time and costs required to develop new products. The EMA document points out that some of the existing rules and good practices can also apply to such systems and guide the work of the other actors involved, but many aspects are not regulated, so the support of the regulatory authorities will be essential, at least in the early stages.
At the same time, regulatory initiatives in the sector are set to intensify, starting with the AI Act, which, once passed, will have a direct impact on the medical sector and an indirect impact on the pharmaceutical sector. However, the AI Act alone will not be sufficient to answer the many questions raised by the use of these technologies in the drug development phases (e.g., how to authorize their use, how to assess the reliability of their results for authorization purposes, and so on). Answers to those questions will have to wait.
In the meantime, through the reflection paper, the EMA intends to support this process of in-depth analysis and regulation, with the aim of arriving at a final document that can at least provide general guidance on the precautions and measures appropriate to the various phases of the drug development cycle. In the United States, the Food and Drug Administration recently published a discussion paper on the use of AI and ML in medicine development and production, engaging the various stakeholders with the same objective of stimulating dialogue and creating shared rules for the use of these new technologies in the pharmaceutical sector.
[1] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.
[2] On the subject of classifying software as a medical device, see the guidelines of the Medical Device Coordination Group MDCG 2019-11 of October 2019, available at https://health.ec.europa.eu/system/files/2020-09/md_mdcg_2019_11.
[3] https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.