This article was also published on Law360 on Jan. 27, 2026.
On Sept. 17, 2025, Italy became the first European Union member state to enact comprehensive artificial intelligence legislation when its Parliament approved Law No. 132/2025, or Italy’s AI law. The law took effect on Oct. 10.
This matters for a significant number of U.S. companies: More than 2,750 U.S. firms operate in Italy, employing around 400,000 people across sectors such as technology, manufacturing, life sciences, financial services and professional services [1].
Furthermore, the U.S. remains Italy’s leading non-European source of investment, with U.S. foreign direct investment stock rising to $29 billion in 2023, representing 13% year-on-year growth.
This means a large number of American companies are now directly subject to Italy’s AI-specific rules, criminal provisions and sector-level requirements. With billions in U.S. assets and operations in Italy, compliance gaps create immediate legal, financial and operational risks.
Although the new AI law must be interpreted consistently with the EU AI Act — and does not create new obligations beyond those provided by that regulation — it introduces national principles, transparency rules and sector-specific requirements that are immediately applicable.
For U.S. companies operating in Italy or serving Italian customers, the legislation creates a two-tier compliance framework: (1) EU AI Act obligations, which apply directly and phase in through 2027, and (2) Italy-specific requirements, including immediately enforceable criminal penalties, designated national authorities, sector-specific requirements and implementing decrees due by October 2026.
This article outlines Italy’s pioneering new AI law and explains what U.S. companies operating in Italy must do to navigate its immediate sector-specific requirements, criminal risks and dual-track compliance with the EU AI Act.
The EU AI Act Foundation
The EU AI Act establishes a risk-based framework that prohibits certain AI practices in all 27 member states, including Italy. For U.S. companies operating in Italy, or offering AI-enabled products and services to Italian deployers and/or users, this matters because the EU AI Act sets the core obligations they must meet before even considering Italy’s additional national requirements.
The act bans certain AI practices outright, imposes detailed obligations on high-risk systems, requires transparency for AI that interacts with individuals or generates content, and creates compliance duties for general-purpose AI models — categories into which many U.S. technologies already fall.
Key provisions already in effect include the following:
- February 2025: Prohibitions on unacceptable AI practices, together with AI literacy obligations.
- August 2025: Transparency requirements and general-purpose AI obligations.
Obligations for high-risk and product-linked AI systems will phase in through 2026 and 2027. Italy’s national law sits atop this structure and must be applied consistently with the EU regulation, meaning U.S. firms face a dual compliance burden.
Italy’s Complementary Framework
Italy’s AI law articulates national principles aligned with the EU AI Act, including protection of fundamental rights, transparency, nondiscrimination, human oversight and cybersecurity by design. The law designates the Agency for Digital Italy as the notifying authority and the National Cybersecurity Agency as the market surveillance authority.
Sectoral regulators, including the Data Protection Authority, retain supervisory powers within their domains. The government must issue decrees to implement these general principles and establish sanctions and enforcement mechanisms by Oct. 10, 2026.
A notable provision with major implications for U.S. cloud and enterprise software providers instructs public procurement platforms to favor solutions ensuring localization of “strategic data” at Italian data centers and models with high security and transparency standards. However, it remains unclear which data qualifies as strategic, and how this preference reconciles with EU internal market obligations.
Sector-Specific Requirements
Although Italy’s AI law does not create obligations beyond the EU AI Act, it establishes national, sector-specific principles and transparency requirements that are now enforceable and affect many American firms.
Healthcare and Life Sciences
U.S. medical device makers, digital health platforms and biotech companies must:
- Notify patients when AI systems are used in their care;
- Ensure AI systems do not discriminate in healthcare access;
- Maintain human decision-making in clinical contexts; and
- Periodically verify and update AI systems to ensure reliability.
Italy’s AI law also expands permissible use of personal data for public-interest health research as well as the reuse of deidentified data for medical research, subject to Data Protection Authority notification with a 30-day standstill period. These provisions could significantly affect U.S. medical research partnerships, though their precise applicability remains unclear.
Employment
Italy’s AI law prohibits AI-based discrimination in employment decisions, and U.S. companies with Italian employees must notify workers whenever AI systems are used in human resources or workplace processes. Italy will also create an observatory on AI adoption in the workplace within the Ministry of Labor.
American companies with Italian employees must ensure HR systems comply with these notification and nondiscrimination requirements.
Professional Services
For U.S. advisory and consulting firms operating in Italy — including lawyers, accountants, engineers and other professionals subject to a professional order — AI may be used only for auxiliary or support activities. Professionals must inform clients in plain language about any AI systems they use.
Minors
Minors under age 14 must have parental consent to access AI technologies. Providers must implement age-gating mechanisms. This affects any U.S. company offering AI-powered services to Italian consumers, from chatbots to ed-tech vendors.
Criminal Liability: Immediate Enforcement
Italy’s AI law establishes new criminal offenses and aggravating circumstances, including the following, which may expose U.S. executives, directors and employees to personal liability when operating in Italy:
Deepfake Offenses
The law criminalizes distribution of deepfakes, which it defines as AI-generated or altered content that can mislead people about its authenticity and cause harm.
Copyright-Extraction Offenses
Using AI to unlawfully extract copyright-protected online content is now a criminal offense, potentially affecting AI developers using web-scraped training data. Deepfake and copyright extraction offenses currently apply only to natural persons, e.g., directors, executives or employees, rather than to companies.
Aggravating Circumstances
The use of AI in committing any crime constitutes a general aggravating circumstance when it is used as a treacherous means, obstructs the normal defensive reaction of authorities or victims, or aggravates harmful consequences. AI use that misleads citizens in the exercise of political rights constitutes an aggravated offense. This could raise issues for executives of platforms enabling political content creation or dissemination.
Market Manipulation Risks and Corporate Liability
AI-enabled market manipulation triggers heightened penalties and may expose companies to corporate criminal liability under Law No. 231/2001 if committed in the company’s interest without adequate compliance systems.
Anticipated Reforms
Implementing decrees will introduce additional criminal offenses, including failures to implement adequate security measures in the production, distribution and use of AI systems when such failures endanger public safety or state security, as well as provisions extending corporate criminal liability to existing AI-related offenses. Companies should proactively assess criminal liability exposure now rather than waiting.
Civil Liability: Uncertain Landscape
As there is no EU or Italian AI-specific civil liability framework, U.S. companies will be subject to Italy’s fault-based tort system and strict product liability rules.
Under Italy’s general tort regime, plaintiffs must prove wrongful conduct or negligence, actual damage, and a causal link between the AI system and the harm. This fault-based approach could require, for instance, demonstrating that the company failed to exercise reasonable care in deploying or monitoring the AI system.
However, the black-box nature of many AI systems creates unique evidentiary challenges. It can be difficult for plaintiffs to prove exactly how an AI system caused harm, but equally difficult for defendants to demonstrate they exercised due care when the system’s decision-making process is opaque.
Italy’s product liability regime presents a different risk profile. Under EU and Italian law, manufacturers face strict liability if their AI system is deemed a defective product, with no proof of fault required. This is particularly relevant for AI embedded in medical devices, autonomous systems or consumer products. A plaintiff needs only to prove the product was defective, damage occurred, and a causal relationship between the defect and the harm.
Practical Guidance for U.S. Companies
American businesses should take the following steps:
- Map and classify AI systems used in or serving Italy; determine risk categories; implement EU AI Act requirements, i.e., risk management, data governance, documentation, human oversight and cybersecurity;
- Align compliance programs with EU AI Act milestones through 2027;
- Address Italy-specific requirements, including age-gating, patient and employee notices, nondiscrimination safeguards, limits on AI use in intellectual professions, and data localization for public tenders;
- Update criminal risk assessments, focusing on deepfakes, copyright extraction, aggravating factors and market abuse liability. Ensure compliance systems address these risks, particularly for financial institutions and media companies;
- Prepare for general-purpose AI obligations, including technical documentation, downstream information protocols, copyright compliance policies and training-data summaries;
- Monitor implementing decrees expected by October 2026 for sanctioning frameworks, enforcement procedures and sectoral adjustments; identify required interactions with the Agency for Digital Italy, the National Cybersecurity Agency and sectoral supervisors; and
- Build flexible frameworks, anticipating limited regulatory guidance due to resource constraints at national authorities.
Avoiding civil liability under Italy’s current regime requires a proactive approach, which might include the following kinds of steps.
Companies should establish clear accountability for AI system safety and quality by designating responsible individuals or teams for prerelease testing, post-market surveillance and incident response.
They should also conduct and document rigorous testing and validation before release, including functional testing and safety testing across diverse scenarios and edge cases. Companies should maintain detailed records of the state of scientific and technical knowledge at the time the AI system was put into circulation.
Under Italian and EU product liability law, the “development risk defense” allows manufacturers to avoid liability by proving that at the time of the product’s release, the defect could not have been discovered. For AI systems, this means documenting what was knowable about AI safety, bias, robustness and security at the time of release — including industry standards, academic research, regulatory guidance and best practices. If a defect emerges later that was not detectable given the state of knowledge at release, this defense may be available, but only if it can be proved what was and wasn’t knowable at that time.
Additionally, companies should implement post-market monitoring to detect issues after deployment and maintain the ability to update or recall AI systems if defects emerge.
Conclusion
Italy’s AI law marks the first national implementation of the EU AI Act and immediately reshapes the compliance landscape for U.S. companies engaged in the Italian market.
U.S. firms must now navigate both EU-wide rules and Italy’s unique national obligations, including criminal provisions for executives and mandatory sector-specific duties. Early preparation will help American companies manage Italy’s existing requirements and those to follow through implementing legislation by October 2026, and to anticipate similar national frameworks emerging across Europe.
Ilaria Curti is a partner at Portolano Cavallo Studio Legale.
Laura Liguori is a partner at the firm.
Jeremy Maltby is a partner at the firm. He previously served as associate deputy attorney general at the U.S. Department of Justice, and in the White House Counsel’s Office as special assistant and senior counsel to President Barack Obama.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.