Artificial Intelligence in healthcare between opportunities and new challenges

The Italian version of this article is available on AgendaDigitale.eu.

Artificial Intelligence (AI) is becoming an increasingly important protagonist in the healthcare sector, and it has undoubtedly earned its place among tomorrow’s main challenges. Consider the huge potential of AI-based software as a tool to support diagnosis and treat pathologies: a tool whose functioning can draw on an amount of precedent cases and knowledge beyond the reach of human intelligence, and which improves day by day, learning from experience and constantly refining itself.

What are the challenges arising from the use of these tools? Leaving aside ethical issues, which are certainly relevant in this field, there are plenty of tangible regulatory and legal questions to be faced, such as the arrangement of appropriate procedures to guarantee the safety and reliability of these tools, and the definition of liability in cases of malfunction or “error” (the very definition of “error” or malfunction being itself a critical point). Not to mention the delicate matter of protecting patients’ personal data, since those data are the fuel on which such systems run.

Regulatory challenges

From a regulatory point of view, some of the issues arising in relation to AI-based software are the same as those already raised by any software or app used in the medical field: primarily, the criteria for classifying software as a medical device, and for demonstrating its safety and reliability.

It seems evident that technologies based on artificial intelligence and machine learning systems that are intended to be used for one or more medical purposes (namely, to treat, diagnose, cure, mitigate or prevent pathologies) fall within the category of Software as a Medical Device (SaMD). According to the definition provided by the International Medical Device Regulators Forum (IMDRF), SaMD is software intended to be used for one or more medical purposes that performs those purposes without being part of a hardware medical device.

Once software has been classified as a medical device, its risk level for patients must be evaluated in order to determine what studies and analyses must be carried out to demonstrate its safety and reliability, and what approvals are required before the software can be placed on the market. These aspects, as already mentioned, are common to other kinds of software.

AI-based SaMD, however, differs from all other SaMD in its ability to constantly learn from the outside, real world and, consequently, to improve its performance. The fact that these SaMDs are evolving, “moving” tools, whose capabilities change and improve over time, raises wholly new issues. Indeed, because of this innate ability to evolve, the software in use may come to differ from the version that was initially approved, and may therefore need to be re-evaluated after a certain length of time.

Privacy issues

As already mentioned, the functioning of these tools rests on the availability of immense and ever-growing amounts of data. This makes especially critical every issue concerning how such data are gathered, stored and disclosed, in compliance with the requirements of patients’ personal data protection.

This peculiarity also makes the safety of these devices an especially sensitive topic, not only in the more traditional sense, but also with regard to their ability to withstand cyber-attacks, adequately protecting all of the data that they store and, in certain cases, the lives of the patients using them.

Regulatory attempts in the USA

On April 2nd, 2019, the Food and Drug Administration (FDA) published a discussion paper describing a proposal for a regulatory framework covering modifications to AI-based medical devices already on the market, with a view to drafting guidelines on the matter (Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device).

The discussion paper sets out a way of managing the most peculiar aspect of AI-based software, that is, its “mobility”, as mentioned above. Specifically, the document proposes that, when applying to place the device on the market, the manufacturer submit a predetermined change control plan that covers the prospective changes expected from the software’s continuous updating and retraining, and that specifies the method to be used to implement those changes in a controlled way, so as to manage the risk to patients (the “Algorithm Change Protocol”).

Changes resulting from the software adapting to stimuli from the outside world are therefore treated as modifications to the medical device. If a change falls within the scope of the changes already approved in the control plan, the manufacturer is only required to document it; otherwise, the manufacturer must file a new request for placement on the market with the FDA.
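By way of illustration only, the following is a minimal sketch, in Python, of the kind of decision logic such a predetermined change control plan might encode. All names and thresholds below (ChangeEnvelope, ModelUpdate, review_update, the performance bounds) are hypothetical assumptions made for the sake of the example, not part of the FDA’s actual framework:

```python
from dataclasses import dataclass

@dataclass
class ChangeEnvelope:
    """Hypothetical 'predetermined change control plan': the bounds
    within which retraining-driven changes are pre-approved."""
    min_sensitivity: float      # worst performance still pre-approved
    min_specificity: float
    allowed_intended_use: str   # changes must not alter the intended use

@dataclass
class ModelUpdate:
    """A candidate update produced by retraining on new data."""
    sensitivity: float
    specificity: float
    intended_use: str

def review_update(update: ModelUpdate, envelope: ChangeEnvelope) -> str:
    """Return 'document only' if the update stays inside the approved
    envelope, or 'new FDA submission' if it falls outside it."""
    within_performance = (
        update.sensitivity >= envelope.min_sensitivity
        and update.specificity >= envelope.min_specificity
    )
    same_use = update.intended_use == envelope.allowed_intended_use
    if within_performance and same_use:
        return "document only"          # change is pre-approved
    return "new FDA submission"         # outside the approved scope

# Example: a retrained model that improves performance without changing
# its intended use would only need to be documented.
envelope = ChangeEnvelope(0.90, 0.85, "diabetic retinopathy screening")
update = ModelUpdate(0.93, 0.88, "diabetic retinopathy screening")
print(review_update(update, envelope))  # -> "document only"
```

On this reading, the plan fixes in advance the envelope within which retraining may move the device; anything that steps outside that envelope goes back to the regulator.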

It will be interesting to see whether such proposals will translate into laws or guidelines, and then to evaluate their effectiveness in practice.

What about Europe?

In Europe, although the debate on AI is rather lively and many discussion papers have already been published, at present there is no meaningful regulatory initiative specifically addressing the use of AI devices in the medical field.

In the coming months, the new European regulations on medical devices will become applicable, replacing the previously applicable national laws. These regulations provide far more precise rules for software classified as a medical device: its classification according to patient risk level, and the design requirements with which manufacturers must comply in order to take into account the way the SaMD will be used and the context in which users will operate. Neither regulation, however, addresses the peculiarities of AI-based software.

From one side of the ocean to the other, we are witnessing a phase of cultural maturation and a gradual increase in the understanding of artificial intelligence. A systematic regulation of its concrete manifestations, however, is lacking: they continue to be governed by existing norms, which are often inadequate to provide the certainty that operators in the sector need. The risk is that this regulatory paralysis may considerably slow the development of the huge potential of these new tools, especially in a sector as critical as healthcare.
