Transparency of information and generative Artificial Intelligence: The Italian Competition Authority accepts DeepSeek's commitments tackling hallucinations in AI models

With decision no. 31784 of 16 December 2025 (the “Decision”), the Autorità Garante della Concorrenza e del Mercato (“AGCM”) closed investigation no. PS12942 against Hangzhou DeepSeek Artificial Intelligence Co., Ltd. and Beijing DeepSeek Artificial Intelligence Co., Ltd. (jointly, “DeepSeek”) by accepting the commitments presented by DeepSeek regarding user information on the risks associated with so-called “hallucinations” of AI models.

AGCM contended that DeepSeek was providing insufficient information to its users about the possibility that its AI models could generate output containing inaccurate, misleading, or false information. Before the proceedings began, DeepSeek displayed only a generic notice informing users that they were accessing an AI system and that the replies provided were “for reference only”.

Notably, no Italian translation of this notice was provided, with the result that Italian users accessing DeepSeek’s large language model were not effectively informed about the reliability of the replies received from DeepSeek, thereby exposing them to the risk of relying on inaccurate or misleading information without being aware of such limitations.

The phenomenon of hallucinations in AI systems and DeepSeek’s (initial) technical response:

In the context of AI systems, “hallucinations” refers to the phenomenon whereby an AI system generates information that is incorrect, nonsensical, or entirely fabricated, yet presents it confidently as if it were factual. Hallucinations occur when AI models, particularly large language models, produce outputs that are not grounded in their training data or contradict verifiable facts, often filling gaps in knowledge with plausible-sounding but false or misleading information.

Of note, hallucinations may range from subtle inaccuracies to completely invented details, including non-existent citations, fictitious events, or fabricated data, and represent a significant challenge in ensuring the reliability and trustworthiness of AI systems, especially in professional or sensitive applications.

DeepSeek stated that the phenomenon of hallucinations constitutes an objective and unavoidable challenge for all operators in the AI sector, as no method has been found to completely and definitively eradicate it. DeepSeek held that it was fulfilling all its due diligence obligations through the continuous improvement of its training data and their quality, and through the implementation of technical measures such as the use of multiple datasets and Retrieval-Augmented Generation (RAG) technology, which allows AI models to quickly and accurately consult diverse information sources when responding to user requests.
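For readers less familiar with the technique, the RAG pattern referred to above can be illustrated with a minimal sketch: before answering, the system retrieves relevant passages from an external corpus and grounds the model's reply in them. The toy corpus, the word-overlap scoring, and the prompt format below are purely illustrative assumptions for explanatory purposes, not DeepSeek's actual pipeline.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern:
# retrieve external passages relevant to the query, then build a prompt
# that asks the model to answer from those sources rather than solely
# from its training data. All names and data here are illustrative.

# Toy corpus standing in for the "diverse information sources" a RAG
# system consults; a real deployment would query live, authoritative sources.
CORPUS = [
    "The AGCM is the Italian Competition Authority.",
    "Hallucinations are confident but false outputs of a language model.",
    "RAG grounds model replies in retrieved documents.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by naive word overlap with the query (a real system
    would use embedding similarity) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend the retrieved passages so the model is instructed to answer
    from cited sources instead of filling gaps with invented content."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    query = "What are hallucinations in a language model?"
    prompt = build_prompt(query, retrieve(query, CORPUS))
    print(prompt)  # this grounded prompt would then be sent to the model
```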

The commitments accepted by AGCM:

The investigation was ultimately closed by accepting the commitments presented by DeepSeek, which consist of several measures divided into four lines of action that DeepSeek will need to implement within 90 days:

  • The insertion of a permanent information banner in Italian below the prompt input box, displaying the message “Content generated by AI. Always verify the accuracy of the responses, which may contain inaccuracies”, together with a hyperlink to DeepSeek’s terms and conditions. Furthermore, an additional banner will be implemented on the registration page.
  • DeepSeek also committed to improving its AI models by reducing the incidence of hallucinations through a three-phase strategy consisting of: a pre-training phase using meticulous data filtering; a post-training phase using specialised question-answer pairs and reinforcement learning processes; and an implementation phase allowing access to external information in real time and prioritising reputable and authoritative sources.
  • The insertion of an additional banner in Italian directly in the conversation with the AI agent for queries concerning particularly sensitive topics such as legal, medical, or financial matters.
  • The translation into Italian of DeepSeek’s Terms and Conditions (including the part concerning hallucination risks), which are currently available only in English.

AGCM ultimately took the view that the commitments undertaken by DeepSeek will contribute to making the information on the risk of hallucinations more transparent, intelligible and immediate for end-users. Of note, AGCM appeared to appreciate that DeepSeek’s commitments extend to the adoption of technological measures aimed at effectively addressing the problem at its very root.

The opinion of the Italian Communications Authority:

AGCM also requested AGCom (the Italian Communications Authority) to issue an opinion under Article 27(6) of the Italian Consumer Code. Of note, AGCom did not issue such an opinion, holding that DeepSeek’s systems would meet the criteria identifying AI systems that qualify as intermediary services under the DSA (which is enforced by AGCom itself), and thus reserved the right to take action against DeepSeek in case of possible violations.

AGCM contested this classification and, among other things, observed that DeepSeek’s search function (which, according to AGCM, would trigger such classification) is merely optional for users, and that when this option is not activated DeepSeek does not offer results from different websites as a traditional search engine would. Instead, with the search function deactivated, DeepSeek provides only a text output that does not result from any additional research, as the reply is prepared on the basis of DeepSeek’s existing dataset.

In addition, AGCM noted that DeepSeek’s classification as an online search engine would not, in any case, automatically lead to its classification as an intermediary service under the DSA: being a search engine would satisfy a necessary, but not sufficient, condition for qualifying as an intermediary service.

Finally, citing Article 2 and Recital 10 of the DSA, AGCM stressed that the DSA itself does not prejudice the acquis in the field of consumer protection and that Article 27(1-bis) of the Italian Consumer Code exclusively entrusts AGCM with enforcement against all conduct giving rise to unfair commercial practices.

AGCM’s approach on transparency towards consumers in relation to AI models:

Of course, the Decision rendered by AGCM does not directly impose obligations on other providers of AI models; however, it still provides clear guidance on the issue of transparency towards consumers in relation to AI models, especially as regards the inherent limitations of these technologies.

It is clear from the Decision that, in AGCM’s view, the adoption of permanent information banners – that is, banners that are visible at all times during use of the service and presented in the user’s own language – may constitute an effective form of communication.

The Decision also highlights that purely informational measures can usefully be supplemented by technological interventions aimed at mitigating the phenomenon of hallucinations at its root: such an integrated approach, combining transparency of information and performance improvement, was evaluated favourably by AGCM in issuing its Decision.

Companies distributing AI models should monitor further developments on the topic.
