AI and healthcare: A question of liability
Artificial intelligence (AI) systems of varying degrees of sophistication are increasingly used in health care across a wide range of medical services.
Some software is designed to assist and support healthcare professionals. This includes programs that process medical data and information to generate diagnostic suggestions and treatment recommendations; such programs are sometimes integrated into medical hardware devices. There are also software tools for interpreting examination results, such as the ECG interpretation software now in widespread use.
There are also numerous AI-enabled platforms and applications available directly to patients. These allow patients varying degrees of autonomy in managing their own care. They are often installed on smartphones and used in connection with external devices. Some of these applications calculate the correct medication dosage by analyzing data and parameters and then send input to external devices (such as insulin pumps). Others monitor patient health and send alerts when certain thresholds are exceeded, enabling timely medical intervention.
Depending on what the manufacturer establishes as the intended use of such software—including standalone software[1]—it may qualify as a medical device under Regulation (EU) 2017/745 (the “MDR”).[2] However, the regulation does not provide specific guidance on liability issues. It merely assigns general liability to the manufacturer in the case of malfunction or related damage and requires manufacturers to ensure adequate financial coverage for that liability, typically in the form of insurance.
Software under the care of a professional
In some cases, AI technology allows a physician to control and validate results generated by software. The clinician directs the algorithm’s operation in real time or reviews results produced by the software in order to correct any errors before treatment is administered to the patient. When the clinician can access and trace the operational logic of the algorithm—i.e., in a situation of transparency—they can identify potential biases.
For example, ECG analysis and interpretation software trained on large datasets can detect abnormalities in test results that may elude the human eye. This allows early diagnosis and monitoring of certain cardiac conditions.
In such situations, the clinician could be considered responsible for the health care provided with the assistance of AI tools under their direct control. Any issues arising from potential patient injuries could be addressed by applying standard legal remedies and special rules, such as the medical liability provisions under Law No. 24/2017, known as the Gelli-Bianco law.
Specifically, an injured patient may bring a contractual claim against the individual professional and/or the associated healthcare facility, the latter of which is likely to have deeper pockets. The professional in any event remains liable in tort under Article 2043 of the Italian Civil Code.[3] A contractual claim by the patient essentially places the burden on the defendant—whether physician or healthcare facility—to demonstrate proper conduct.
Software with a high degree of autonomy
Different considerations apply to software that operates with a high degree of autonomy, interacting directly and immediately with the patient for diagnostic or therapeutic purposes. This scenario is even more complex when it involves software whose operational logic cannot be closely traced due to its design and development.
In these cases, it must be determined whether the physician can be held liable for damages caused by the algorithm, even though they did not have the opportunity to intervene and direct or correct the software’s actions. Potential malfunctions and related patient injuries cannot be ruled out, particularly given that AI systems—especially generative models—are inherently subject to evolution, updates, and changes that affect their operation in increasingly unpredictable ways.
The issue of liability has major implications both economically (considering that increasingly complex and autonomous software will soon enter the market) and socially (attributing liability to physicians could foster defensive medicine practices, whereby physicians avoid using AI systems they cannot fully control, even though those systems offer potential for improved outcomes).
The fact that such software is often subject to medical device regulations offers some reassurance, since its algorithmic performance and safety must be verified through the conformity assessment procedures required by the MDR. Moreover, one could argue that a software product’s certification as a medical device allows the physician to rely on its safety, performance, and suitability for the intended use. This does not automatically exclude liability in all cases involving certified AI systems: liability ultimately depends on the specific case and the role played by the device in the health care provided to the patient.
The draft law on AI approved last April by the Council of Ministers merely stipulates, “AI systems in healthcare constitute a support mechanism in prevention, diagnosis, treatment, and therapeutic choices, leaving the decision-making authority solely to the medical profession.”[4] Nonetheless, in the case of software/medical devices, manufacturer liability plays a key role.
It remains to be seen how manufacturer liability will be enforced when software is used under a physician’s supervision, prescribed by a physician, or deployed within a healthcare facility. For example, a patient could still bring a contractual claim against a healthcare facility, citing harm caused by the facility’s poor organization and failure to ensure an adequate level of care, including with respect to the software solutions employed. However, if the harm were caused by a software malfunction or manufacturing defect that the facility could not have detected with ordinary diligence, the facility may be exempt from liability to the patient, with the manufacturer potentially bearing it by way of recourse.
Software used independently by patients
Similar considerations apply to applications provided directly to patients by companies, such as smartphone applications or programs used on personal computers. There are numerous examples, including apps that monitor physiological processes and parameters, sometimes used in connection with external hardware, and applications that leverage AI to provide information and recommendations on the progression of a condition based on data and information collected by the user/patient.
If a malfunction caused harm, the user could claim the manufacturer is liable for the software and rely on the remedies in the Consumer Code[5] for product defect–related damages. However, this route may require considerable effort to prove the constitutive elements of the claim, particularly the defect and its causal link to the harm.
Software transparency and interpretability
The more software is designed and developed to allow users to interpret the results generated by an algorithm and trace its decision-making process, the more feasible it becomes for healthcare professionals to monitor its operation, validate its results, and intervene to correct any malfunctions. Understanding the software’s operational logic and being able to interpret its results also make it easier to prove malfunctions in court.
It is no coincidence that the final version of the AI Act—currently awaiting formal adoption—requires providers of high-risk AI systems to design and develop algorithms that ensure sufficient transparency for interpreting output and using it appropriately (Article 13).
Under the AI Act, AI-based software that is classified as a medical device under sectoral regulations (i.e., the MDR) and that requires a conformity assessment by a notified body before being placed on the market is considered high-risk.
The evolving European regulatory framework
Significant developments expected at the European level will impact the healthcare sector. Two draft directives are currently under discussion: one reforming liability for defective products, designed to replace the current Directive 85/374/EEC with rules better suited to the digital era, and the other introducing a unified framework for non-contractual liability for AI-related damages.
The product liability directive now under review by the European Parliament is intended to update the current, outdated framework for the digital era. Noteworthy items include updating the definition of a product to explicitly include software (with the exception of open-source software).[6] Additionally, the directive introduces provisions to ease the burden of proof for the injured party by (i) granting the court the authority, upon request of an injured party who has “submitted sufficient facts and evidence to support the plausibility of the claim,” to order the defendant to disclose relevant evidence; and (ii) introducing presumptions concerning the product’s defective nature where specified conditions are met.[7]
With the proposed directive on non-contractual liability for damages caused by AI, the European legislature aims to harmonize evidentiary requirements across Member States by introducing (i) disclosure obligations for the defendant in actions initiated by the injured party, similar to those in the product liability proposal but applicable only to high-risk AI systems;[8] and (ii) presumptions easing the injured party’s burden of proof in compensation claims, particularly concerning causation between the negligent conduct and the damage caused by the AI system, subject to specific conditions.[9]
Thus, the regulatory framework—already complex due to the multiple liability regimes mentioned above—will be further complicated by the introduction of two additional directives. The legislature justifies this multi-faceted approach on the grounds that it ensures the user can seek recourse across the entire supply chain. In the medical field, this chain is particularly intricate: in addition to the patient and the medical practitioner or healthcare facility, it may involve the manufacturer, the software developer, the algorithm programmer, and so forth.
Therefore, while a single, unified, and coordinated text might have been preferable, as it would have simplified a complex system, it is also true that the potential scope of AI application is vast. Consequently, at least at this stage, it is not surprising that the decision was made to implement measures on multiple fronts.
[1] Software that operates independently of any hardware support.
[2] When AI algorithms are used in health care, the related software often qualifies as a standalone medical device with a specific medical purpose, in line with the guidelines of the Medical Device Coordination Group and the European Commission. In other cases, the software is nonetheless subject to medical device regulations, albeit to a different extent, as it may qualify as an accessory to a medical device under the MDR. These considerations also apply to digital therapeutics, i.e., therapies designed to prevent or treat diseases or other pathological conditions (such as obesity, anxiety, or depression) by modifying the patient’s behavior. Such software also falls squarely into the category of medical devices, despite functional mechanisms that distinguish it from more traditional devices.
[3] Pursuant to Law No. 24/2017, Art. 7, “A public or private healthcare or social care facility that, in the fulfillment of its obligations, employs healthcare professionals—even if selected by the patient and not employees of the facility—shall be liable, pursuant to Articles 1218 and 1228 of the Civil Code, for their intentional or negligent conduct,” while the healthcare professional “is liable for their actions pursuant to Article 2043 of the Civil Code, unless they acted in the performance of a contractual obligation assumed with the patient.”
[4] Draft Law “Principles in the Field of Artificial Intelligence,” the text of which was approved at the Council of Ministers meeting on April 23, 2024.
[5] Legislative Decree No. 206/2005.
[6] This modification is not expected to have significant impact at the national level, given that the Consumer Code already provides a broad definition of product that includes software.
[7] Article 10 of the proposed directive on liability for damage caused by defective products, repealing Council Directive 85/374/EEC.
[8] The definition of a high-risk AI system is provided in Article 6 of the AI Act.
[9] Article 4 of the proposed directive on adapting non-contractual liability rules to AI.