Law on Artificial Intelligence

Jurisdiction: Italy
Status: Enforcing
Effective: Oct 10, 2025
Authority: Agency for Digital Italy (AgID)
Official text: verified Mar 28, 2026

Obligations Covered

Human Oversight, Transparency & Disclosure, Bias & Discrimination Prevention, Data Governance

Healthcare AI — Human Oversight

Obligation: Human Oversight
Status: Enforcing
Effective: Oct 10, 2025
Risk tier: high
Scope: deployers
Tags: high-impact, cross-domain

Italy is the first EU member state to legislate sector-specific AI rules beyond the EU AI Act. For healthcare AI, the law establishes a hard prohibition on AI making autonomous clinical decisions — physicians retain ultimate authority regardless of AI recommendation quality. Any healthcare organisation deploying diagnostic or treatment AI in Italy must build physician-override workflows into every clinical AI deployment.

Requirements

Physician authority: AI systems cannot replace human clinical judgment or make fully automated clinical decisions; physicians retain ultimate decision-making authority.
Patient notification: Patients must be informed of AI use, its benefits, and the logic of AI-assisted decision-making before and during care.
Support role only: AI may support prevention, diagnosis, and treatment but must be positioned as a decision-support tool, not a decision-maker.
AGENAS platform: The National Agency for Regional Health Services (AGENAS) develops a national AI platform to assist medical staff; outputs are non-binding suggestions.
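
In practice, the physician-override requirement means every clinical AI deployment needs a gate where no AI suggestion becomes a decision without a clinician's sign-off. A minimal sketch of such a gate follows; the names (`AIRecommendation`, `finalize_decision`) and record shape are illustrative assumptions, not taken from the law or any real system:

```python
# Minimal sketch of a physician-override gate for clinical AI output.
# All names and fields are illustrative, not from the law or a real system.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AIRecommendation:
    patient_id: str
    suggestion: str     # e.g. proposed diagnosis or treatment
    model_version: str
    rationale: str      # logic to be disclosed to the patient


@dataclass
class ClinicalDecision:
    recommendation: AIRecommendation
    physician_id: str
    accepted: bool      # the physician may accept, modify, or reject
    final_decision: str
    signed_at: datetime


def finalize_decision(rec: AIRecommendation,
                      physician_id: Optional[str],
                      final_decision: str,
                      accepted: bool) -> ClinicalDecision:
    """Refuse to record a clinical decision without a physician sign-off."""
    if not physician_id:
        # Hard stop: an AI suggestion alone can never become a decision.
        raise PermissionError("A physician must sign off on every clinical decision.")
    return ClinicalDecision(
        recommendation=rec,
        physician_id=physician_id,
        accepted=accepted,
        final_decision=final_decision,
        signed_at=datetime.now(timezone.utc),
    )
```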

Employment AI — Transparency and Disclosure

Obligation: Transparency
Status: Enforcing
Effective: Oct 10, 2025
Risk tier: high
Scope: deployers
Tags: high-impact, cross-domain

Italy extends AI transparency duties into employment and child contexts that sit beyond the EU AI Act's direct scope. Employers using AI in recruitment or performance evaluation must disclose AI involvement to workers — creating a specific notification duty for HR technology deployments. The parental consent requirement for under-14s applies to any AI-powered product or service used by children, including education platforms, apps, and consumer AI.

Requirements

Worker notification: Employers must disclose to workers when AI is used in recruitment, performance evaluation, or other employment processes.
AI decision logic disclosure: Employers must explain the logic of AI decision-making where AI is involved in employment decisions.
Minors' consent: Children under 14 require verifiable parental consent before using any AI-powered product or service.
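
The under-14 consent rule translates directly into an age gate at sign-up. A minimal sketch follows, assuming the service can verify parental consent out of band; the 14-year threshold comes from the law, everything else is an illustrative assumption:

```python
# Illustrative age gate for the under-14 parental consent rule.
# The threshold comes from the law; the API shape is an assumption.
from datetime import date

CONSENT_AGE = 14  # below this age, verifiable parental consent is required


def age_on(birth_date: date, today: date) -> int:
    """Whole years between birth_date and today."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)


def may_use_ai_service(birth_date: date,
                       has_verified_parental_consent: bool,
                       today: date | None = None) -> bool:
    """Allow access only if the user is 14+ or parental consent is verified."""
    today = today or date.today()
    if age_on(birth_date, today) >= CONSENT_AGE:
        return True
    return has_verified_parental_consent
```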

Employment AI — Non-Discrimination

Obligation: Bias Prevention
Status: Enforcing
Effective: Oct 10, 2025
Risk tier: high
Scope: deployers

Requirements

Non-discrimination: AI systems used in employment must not discriminate; discriminatory AI applications in recruitment or evaluation are prohibited.
Data protection compliance: Employers must comply with data protection principles to prevent bias in AI employment systems.
Human oversight in employment: Employers must ensure human oversight of AI-assisted employment decisions.
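
The law prohibits discriminatory AI in recruitment and evaluation but does not prescribe a testing method. One common screening heuristic a deployer might run is a selection-rate comparison across demographic groups; the sketch below uses the four-fifths threshold, which is an assumption here, not a requirement of the Italian law:

```python
# One possible bias screen for an AI recruitment pipeline: compare
# selection rates across demographic groups. The 0.8 threshold is the
# common "four-fifths" heuristic -- an assumption, not a rule the
# Italian law prescribes.
from collections import defaultdict


def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from the AI screener."""
    totals: dict[str, int] = defaultdict(int)
    selected: dict[str, int] = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}


def flag_disparate_impact(outcomes: list[tuple[str, bool]],
                          threshold: float = 0.8) -> list[str]:
    """Return groups whose rate falls below threshold x the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r < threshold * best]
```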

Health Data for AI Research

Obligation: Data Governance
Status: Enforcing
Effective: Oct 10, 2025
Risk tier: all
Scope: developers, deployers
Tags: sleeper, cross-domain

Italy's secondary-use pathway for health data is a sleeper provision with global reach: any organisation conducting AI research on Italian patient data, including non-Italian researchers accessing Italian health datasets, must both comply with the GDPR and notify the Garante 30 days before processing begins. This covers clinical AI model training, drug discovery AI, and public health AI research.

Requirements

Secondary use permitted: Anonymized or pseudonymized health data may be used for AI research serving a significant public interest without new patient consent.
Garante notification: 30-day advance notification to the Italian Data Protection Authority (Garante) is required before commencing AI research using health data.
GDPR compliance: All health data processing for AI research must comply with GDPR requirements.
Anonymization standards: Data must be properly anonymized or pseudonymized before secondary use for AI research.
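
The 30-day advance notification combines with the GDPR and anonymization duties into a simple pre-processing checklist. A minimal sketch follows; the 30-day window is from the law, while the function names and record shape are illustrative assumptions:

```python
# Sketch of a pre-processing checklist for the 30-day Garante notification
# window. The 30-day figure is from the law; everything else (names,
# record shape) is an illustrative assumption.
from datetime import date, timedelta

NOTICE_PERIOD = timedelta(days=30)


def earliest_processing_start(notified_on: date) -> date:
    """First date AI research processing may begin after notifying the Garante."""
    return notified_on + NOTICE_PERIOD


def may_start_processing(notified_on: date | None,
                         data_is_anonymized_or_pseudonymized: bool,
                         gdpr_compliant: bool,
                         start: date) -> bool:
    """All three gates must pass before health data is used for AI research."""
    if notified_on is None:
        return False  # no notification filed with the Garante
    return (data_is_anonymized_or_pseudonymized
            and gdpr_compliant
            and start >= earliest_processing_start(notified_on))
```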