Artificial intelligence and health care: News and issues in light of guidance from the Italian Data Protection Authority and the WHO
The use of artificial intelligence (“AI”), especially in the healthcare field, raises numerous issues. With the European legislature pushing through a Proposal for a Regulation (EU) on Artificial Intelligence (“AI Act”), the topic is the subject of much debate.
AI is promising: it could improve treatment and prevention and enable more effective allocation of resources in the healthcare field.
However, there is trepidation about the consequences of possible errors and misuse of AI systems. To prevent these fears from bringing development of the technology to a halt, while still safeguarding the rights and freedoms of individuals, the European legislature has come up with principles and rules to guide AI development at the EU level. It does so in part by classifying AI systems used in the health sector as “high risk,” thus subjecting them to stringent obligations.
Two recent initiatives from the Italian Data Protection Authority, the Garante per la protezione dei dati personali (“Garante”), and the World Health Organization (“WHO”) not only demonstrate the intention to foster regulated development of AI, but also testify to the existence of common principles that can be used as a foundation for new regulations in this area.
These principles focus on human-centric, ethical AI that is reliable and ensures the protection of humans and their rights under all circumstances.
1. The use of AI systems in the NHS: the Garante’s guidelines
On October 12, 2023, the Garante published guidelines enshrining rules and principles for the use of AI in the National Health Service and setting forth the principles governing the processing of personal data under Regulation (EU) 2016/679 (“GDPR”) when implementing such systems.
The Garante stresses the importance of accountability in conjunction with ethical use of AI systems and the deontological duties of health professionals. For example, recommendations developed by professional bodies and ethics committees are key.
1.1 Legal bases and the roles of the parties
The legal basis for processing must be identified in accordance with the provisions of Article 22 of the GDPR, which dictates rules applicable to automated decision-making, and the rules provided in the national data protection legislation for this sector.
Article 22 states that automated decisions, including profiling, may not be based on the processing of health-related data, except where the data subject has given explicit consent or the processing is necessary for reasons of substantial public interest on the basis of Union or Member State law. The Italian Data Protection Code stipulates that data relating to health may be processed when the processing is provided for by a provision of European Union or national law. As a result of amendments introduced in 2021, regulations and general administrative acts may also serve this purpose, insofar as they specify the types of data that may be processed, the operations that may be carried out, the underlying public interest, and the specific and appropriate measures for protection of the fundamental rights and freedoms of individuals.
In addition, the controller must ensure that any third party to whom data are disclosed employs the legal bases described above. To this end, the roles of the parties must be adequately identified, with particular attention paid to both legal obligations and activities performed by the parties involved in the processing.
1.2 Privacy-by-design and privacy-by-default
The principle of privacy-by-design requires appropriate technical and organizational measures to ensure that processing is proportionate to the public interest pursued and to ensure the integrity and confidentiality of data, protecting them from unauthorized or unlawful processing, accidental loss, destruction, or damage. Risks must be assessed in light of the characteristics of databases and analysis models.
Potential biases arising from the use of machine learning techniques must be considered. Such risks, related to the quality of the data used to train AI systems and the logic applied by algorithmic decisions, can be mitigated by performing a Data Protection Impact Assessment (“DPIA”), as well as by providing meaningful information about the algorithmic logic used to generate data and deliver services through AI systems.
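One concrete way to surface the data-quality risk the Garante describes is to check whether the training dataset under-represents parts of the population it will serve. The sketch below is purely illustrative (the records, attribute name, and reference shares are hypothetical), not a method prescribed by the guidelines:

```python
# Illustrative sketch (hypothetical data): a simple representativeness
# check a DPIA team might run on a training dataset, comparing each
# group's share in the data with its share in a reference population.
from collections import Counter

def representation_gaps(records, attribute, reference_shares):
    """Return, per group, dataset share minus reference-population share.
    Large negative values flag groups under-represented in the data."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - ref_share
        for group, ref_share in reference_shares.items()
    }

# Hypothetical patient records and hypothetical population shares.
records = [{"age_band": "65+"}] * 10 + [{"age_band": "18-64"}] * 90
reference = {"65+": 0.23, "18-64": 0.77}
gaps = representation_gaps(records, "age_band", reference)
# A negative gap means the group is under-represented in the training data.
```

A gap flagged here would feed into the DPIA as a documented risk, together with the mitigation chosen (for example, rebalancing the dataset before training).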
1.3 Basic principles in the use of algorithms: Case law of the Council of State
The Garante outlined three basic principles to be applied to the use of algorithms in areas of public interest, based on guidance stemming from recent Council of State case law:
1. Transparency, which refers to the right of data subjects to be aware of the existence of automated decision-making tools and to be informed about the underlying rationale.
The Garante has set forth a number of measures to ensure the transparency of these systems:
- Ensure that the legal basis for processing is clear, predictable, and knowable by the data subjects;
- Consult with stakeholders and data subjects in carrying out a DPIA;
- Publish, possibly partially, the results of the DPIA;
- Provide data subjects with clear, simple, and concise information in accordance with Articles 13 and 14 GDPR;
- Provide data subjects with additional information, such as the stage at which the processing is carried out (e.g., training or application stage), whether healthcare professionals are required to use AI-based systems, and the diagnostic and therapeutic benefits;
- Ensure that data processing tools used for therapeutic purposes are employed only at the request of healthcare professionals;
- Regulate the liability of healthcare professionals regarding their choice to rely on AI systems to process their patients’ health data.
According to Council of State case law, an algorithm is “knowable” when awareness of the creators, the process used to develop the algorithm, the decision-making mechanism, the priorities assigned during the evaluation procedure, and the data considered relevant can be guaranteed.
The Italian Supreme Court of Cassation recently weighed in on this issue. In its (admittedly controversial) ruling, it reiterated that in order for consent given for the processing of personal data by an algorithm to be lawful, the way the algorithm works must be described to the user in a clear and detailed manner.
All these elements are indispensable in ensuring that the person concerned has a real opportunity to object to the decision.
2. Principle of non-exclusivity, aimed at ensuring that a natural person always has control over automated decisions (i.e., human in the loop), especially in the algorithm training phase.
3. Principle of algorithmic nondiscrimination, which indicates the need to use only reliable AI systems and to check their accuracy periodically to mitigate the risk of errors and discrimination. This principle is closely related to the reliability of AI systems, which depends on the quality of data—which must be correct and up-to-date—and the parameters for operation of the algorithm.
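The periodic accuracy checks this principle calls for can be made concrete by auditing performance per patient subgroup rather than only in aggregate: a system that is accurate overall may still err far more often for one group. The following sketch uses hypothetical audit data and is one possible implementation, not one mandated by the Garante:

```python
# Illustrative sketch (hypothetical data): a periodic per-subgroup
# accuracy audit in the spirit of the nondiscrimination principle.
def subgroup_accuracy(samples):
    """samples: iterable of (group, prediction, ground_truth) tuples.
    Returns the fraction of correct predictions per group."""
    correct, totals = {}, {}
    for group, pred, truth in samples:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == truth)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical audit batch: 3 of 4 correct for group A, 1 of 4 for group B.
audit = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 0, 1), ("B", 1, 0), ("B", 0, 0), ("B", 1, 0),
]
acc = subgroup_accuracy(audit)
# A large accuracy gap between groups would trigger review or retraining.
```

Running such an audit on a schedule, and documenting the threshold at which a gap triggers corrective action, is one way to operationalize the requirement that systems be checked for accuracy periodically.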
1.4 The DPIA
Finally, the Garante highlighted the importance of the DPIA as a tool for both assessing the level of risk to the rights and freedoms of data subjects and establishing appropriate measures to mitigate that risk.
The DPIA should be drafted with due consideration of the risks inherent in processing health data on a large scale (potentially covering the entire national population), as well as in the use of algorithms to identify general behavioral trends. The Garante also emphasized that risks related to profiling activities aimed at making automated decisions that affect the health status of individuals must be taken into consideration when conducting a DPIA.
2. World Health Organization (“WHO”) guidelines
A few days after the Garante published its guidelines, the WHO released guidelines for establishing key principles to be applied to AI in the healthcare sector. Its positions are similar to those of the Garante.
The purpose is to foster the development of reliable systems, taking into consideration the benefits to be gained from their use, such as improved clinical trials, diagnoses, and treatments, and a broad range of personalized care.
In general, the WHO cites the need to ensure compliance with privacy and data protection beginning with the design of AI systems. It also focuses on the importance of ensuring risk management throughout the product lifecycle.
The principle of transparency is a focus: the obligation to document the purpose of the system, its development process, and the way it is used. In particular, the metrics and databases used (which should be sufficiently representative), the reference standards, and any changes made in the course of processing should be documented. The WHO also deems data quality central: high-quality data is essential to avoiding errors and bias in the results produced by these systems.
The WHO and the Garante’s guidelines undoubtedly represent an important step in the evolution of AI and also serve as references to be taken into account in the development and use of such systems and in regulatory development as well. In its conclusions, the WHO stresses the need to reduce the gap between technology development and regulation while also fostering the creation of an internationally consistent regulatory framework.
The guidelines are available on the Garante’s website at the following link: https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9938038.
Legislative Decree No. 196/2003.
Council of State, Judgments No. 2270/2019, No. 8472/2019, No. 8473/2019, No. 881/2020, and No. 1260/2021.
Court of Cassation, Section I (Civil), Judgment No. 28358/2023.