General-Purpose AI Code of Practice (GPAI CoP)

Jurisdiction: European Union
Status: enforcing
Effective: Aug 2, 2025
Authority: European Commission
Official text verified Mar 26, 2026

Obligations Covered

Transparency & Disclosure · Data Governance · Record-Keeping & Documentation · Risk Assessment

GPAI Transparency and Documentation (Article 53)

Obligation: Transparency
Status: enforcing
Effective: Aug 2, 2025
Risk tier: all
Scope: providers (high-impact, cross-domain)
The GPAI Code mandates a public-facing Model Documentation Form for every GPAI model: a standardized disclosure covering technical specifications, training data, compute, and energy use. It is the first transparency template for foundation models with binding effect anywhere in the world, operationalizing an EU obligation that applies to providers worldwide.

Requirements

Model Documentation Form: Draft and maintain a comprehensive Model Documentation Form covering technical specifications, training data characteristics, computational resources, and energy consumption
Downstream disclosure: Proactively provide documentation to downstream providers integrating the GPAI model into AI systems
Authority disclosure: Make documentation available on request to the European AI Office and national competent authorities
Contact publication: Publicly disclose contact information (e.g., a website) for documentation requests
GPAI Template: Complete and publicly disclose the mandatory GPAI Template with training data details

Penalties

AI Act Article 53 infringement: Up to €15 million or 3% of worldwide annual turnover (whichever is higher)
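The "whichever is higher" rule means the €15 million figure acts as a floor that the 3%-of-turnover calculation can exceed. A minimal sketch of that arithmetic (the turnover figures below are purely illustrative):

```python
def article53_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act Article 53 fine: EUR 15 million or
    3% of worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Provider with EUR 2 billion turnover: 3% (EUR 60 million) exceeds the floor.
print(article53_max_fine(2_000_000_000))  # 60000000.0
# Smaller provider: 3% is only EUR 3 million, so the EUR 15 million floor applies.
print(article53_max_fine(100_000_000))    # 15000000.0
```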

Training Data and Copyright Governance (Article 53)

Obligation: Data Governance
Status: enforcing
Effective: Aug 2, 2025
Risk tier: all
Scope: providers (high-impact, cross-domain)
All GPAI providers must implement copyright-compliant training data policies — including robots.txt compliance, mechanisms to prevent infringing outputs, and public training data disclosure. This directly affects every foundation model provider operating in or serving the EU, making EU copyright law a de facto data governance standard for global AI training pipelines.

Requirements

Copyright compliance policy: Implement and maintain a policy for compliance with EU copyright law throughout the training data pipeline
Robots.txt compliance: Honor robots.txt opt-out protocols when crawling data for training
Infringing output prevention: Establish mechanisms to prevent generation of copyright-infringing outputs
Complaint mechanism: Create a complaint mechanism for rights holders regarding copyright infringements
Training data disclosure: Publicly disclose a summary of training data used, including data sources and characteristics
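The robots.txt requirement above can be honored at crawl time with Python's standard urllib.robotparser. A minimal sketch, assuming a hypothetical crawler name ("ExampleAIBot") and an invented robots.txt; a production pipeline would fetch each site's live robots.txt instead of a hardcoded string:

```python
from urllib import robotparser

# Hypothetical robots.txt: a full opt-out for one AI crawler,
# plus a directory-level rule for everyone else.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

def may_crawl(agent: str, url: str) -> bool:
    """Return True only if the robots.txt rules permit this agent to fetch the URL."""
    parser = robotparser.RobotFileParser()
    parser.parse(ROBOTS_TXT.splitlines())
    return parser.can_fetch(agent, url)

print(may_crawl("ExampleAIBot", "https://example.com/articles/1"))  # False: site-wide opt-out
print(may_crawl("OtherBot", "https://example.com/articles/1"))      # True
print(may_crawl("OtherBot", "https://example.com/private/x"))       # False
```

Checking before every fetch (and logging the decision) also produces the kind of processing record the record-keeping obligations in the next section call for.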

Penalties

AI Act Article 53 infringement: Up to €15 million or 3% of worldwide annual turnover (whichever is higher)

Technical Documentation and Record-Keeping (Article 53)

Obligation: Record Keeping
Status: enforcing
Effective: Aug 2, 2025
Risk tier: all
Scope: providers

Requirements

Model Documentation Form maintenance: Keep the Model Documentation Form current and updated as the model evolves
Training records: Maintain records of training data characteristics, sources, and processing
Compute and energy records: Document computational resources and energy consumption used in training
Confidential disclosure: Provide documentation to the AI Office under confidentiality protections when requested

Penalties

AI Act Article 53 infringement: Up to €15 million or 3% of worldwide annual turnover (whichever is higher)

Systemic Risk Assessment (Article 55)

Obligation: Risk Assessment
Status: enforcing
Effective: Aug 2, 2025
Risk tier: high
Scope: providers (high-impact)
Applies only to the most powerful GPAI models (above 10²⁵ FLOPs training compute, or Commission-designated). The Safety and Security chapter operationalizes the most demanding tier of EU AI regulation — requiring state-of-the-art adversarial testing, red-teaming, and cybersecurity measures for models that pose systemic risks to the EU.
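The 10²⁵ FLOPs figure refers to cumulative training compute. One common back-of-the-envelope estimate (not part of the Code itself) is the ~6·N·D approximation, where N is parameter count and D is training tokens. A sketch of how a provider might check its position against the threshold; the model sizes below are illustrative:

```python
# The AI Act's presumption threshold for systemic-risk GPAI models.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D approximation."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute meets or exceeds 10^25 FLOPs."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative: a 70B-parameter model trained on 15T tokens lands at ~6.3e24 FLOPs,
# just under the threshold; a 400B-parameter model on the same data crosses it.
print(f"{estimated_training_flops(70e9, 15e12):.2e}")  # 6.30e+24
print(presumed_systemic_risk(70e9, 15e12))             # False
print(presumed_systemic_risk(400e9, 15e12))            # True
```

Models below the threshold can still be designated as systemic-risk by the Commission, so this estimate is a screening heuristic, not a safe harbor.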

Requirements

Systemic risk assessment: Assess and mitigate systemic risks arising from the GPAI model, including risks to health, safety, fundamental rights, society, and democracy
Adversarial testing: Conduct adversarial testing and red-teaming to identify dangerous capabilities
Cybersecurity measures: Implement cybersecurity controls appropriate to the model's risk level
Safety practices: Apply state-of-the-art safety practices for high-capability model development and deployment
Ongoing monitoring: Continuously monitor for emerging systemic risks post-deployment

Penalties

AI Act Article 55 infringement: Up to €15 million or 3% of worldwide annual turnover (whichever is higher)