The European Commission has released its White Paper on Artificial Intelligence — a European approach to excellence and trust

The European Commission has published a White Paper on Artificial Intelligence, aiming to foster a European ecosystem of excellence and trust in Artificial Intelligence technologies. The document is accompanied by a Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. In these documents, the European Commission promotes a European regulatory approach to addressing the risks associated with certain uses of this new technology.

***

On February 19, 2020, the European Commission (hereinafter, “EC”) published a White Paper aiming to foster a European ecosystem of excellence and trust in Artificial Intelligence,[1] accompanied by a Report on the safety and liability aspects of Artificial Intelligence.[2]

In the two documents, the EC acknowledges that Artificial Intelligence (hereinafter, “AI”) is a strategic technology that offers many benefits for citizens, companies, and society as a whole. Moreover, this new technology brings important efficiency and productivity gains that can strengthen the competitiveness of European industry and improve the wellbeing of citizens.

At the same time, the EC highlights that, due to the specific features of AI systems (among others, the opacity of AI decision-making, autonomy, and data dependency), their use brings risks alongside opportunities. In particular, AI systems may cause harm that can be “both material (safety and health of individuals, including loss of life, damage to property) and immaterial (loss of privacy, limitations to the right of freedom of expression, human dignity, discrimination for instance in access to employment), and can relate to a wide variety of risks.”[3] Furthermore, the EC underlines that “AI technologies may present new safety risks for users when they are embedded in products and services.”[4]

In this regard, according to the EC, the European legislation concerning product safety and liability[5] remains in principle applicable to new AI systems. Nonetheless, regulatory adjustments are needed to introduce provisions that explicitly cover the new risks presented by AI and other emerging technologies, as “a clear safety and liability framework is particularly important when new technologies like AI, the IoT and robotics emerge, both with a view to ensure consumer protection and legal certainty for businesses.”[6]

The White Paper is subject to an open public consultation in which European citizens, Member States, and relevant stakeholders (including civil society, industry, and academia) can provide their views on the document, with a view to contributing to the development of a European approach to AI. The consultation is open until May 19, 2020.[7]

[1] European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, released on February 19, 2020, accessible here.

[2] European Commission, Report on safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics, released on February 19, 2020, accessible here.

[3] European Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, released on February 19, 2020, p. 10.

[4] Ibid., p. 12.

[5] The European legislation for product safety comprises the General Product Safety Directive (Directive 2001/95/EC), as well as a number of sector-specific rules covering different categories of products (for example, machines, planes, and cars).

[6] European Commission, Report on safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics, released on February 19, 2020, p. 2.

[7] The public consultation is accessible here.
