## GPAI Transparency and Documentation (Article 53)
The GPAI Code mandates a public-facing Model Documentation Form for every GPAI model: a standardized disclosure covering technical specifications, training data characteristics, computational resources, and energy use. It is the first transparency template with binding effect for foundation models anywhere, operationalizing an EU obligation that applies to providers worldwide.
### Requirements
| Requirement | Details |
| --- | --- |
| Model Documentation Form | Draft and maintain a comprehensive Model Documentation Form covering technical specifications, training data characteristics, computational resources, and energy consumption |
| Downstream disclosure | Proactively provide documentation to downstream providers integrating the GPAI model into AI systems |
| Authority disclosure | Make documentation available on request to the European AI Office and national competent authorities |
| Contact publication | Publicly disclose contact information (e.g., website) for documentation requests |
| GPAI Template | Complete and publicly disclose a mandatory GPAI Template with training data details |
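The documentation fields above can be pictured as a structured record. The sketch below is purely illustrative: the field names are hypothetical and do not reproduce the AI Office's official template, which providers should consult directly.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical sketch of a Model Documentation Form record.
# Field names are illustrative, not the official AI Office template.
@dataclass
class ModelDocumentationForm:
    model_name: str
    provider: str
    technical_specifications: dict        # e.g. architecture, parameter count
    training_data_characteristics: dict   # e.g. sources, modalities, cut-off date
    training_compute_flops: float         # cumulative training compute
    energy_consumption_kwh: float         # estimated training energy use
    contact_for_requests: str             # public contact point for documentation requests

form = ModelDocumentationForm(
    model_name="example-model-7b",
    provider="Example AI Ltd.",
    technical_specifications={"architecture": "decoder-only transformer",
                              "parameters": 7_000_000_000},
    training_data_characteristics={"sources": ["licensed corpora", "web crawl"],
                                   "cutoff": "2025-01"},
    training_compute_flops=6.3e23,
    energy_consumption_kwh=450_000.0,
    contact_for_requests="https://example.ai/model-docs",
)

# Serializing to JSON mirrors how the same record could back both the
# public disclosure and the on-request copy for authorities.
print(json.dumps(asdict(form), indent=2))
```

Keeping one machine-readable record makes it straightforward to serve the downstream-provider, authority, and public disclosure duties from a single source of truth.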
### Penalties
| Violation | Fine |
| --- | --- |
| AI Act Article 53 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |
## Training Data and Copyright Governance (Article 53)
All GPAI providers must implement copyright-compliant training data policies — including robots.txt compliance, mechanisms to prevent infringing outputs, and public training data disclosure. This directly affects every foundation model provider operating in or serving the EU, making EU copyright law a de facto data governance standard for global AI training pipelines.
### Requirements
| Requirement | Details |
| --- | --- |
| Copyright compliance policy | Implement and maintain a policy for compliance with EU copyright law throughout the training data pipeline |
| Robots.txt compliance | Honor robots.txt opt-out protocols when crawling data for training |
| Infringing output prevention | Establish mechanisms to prevent generation of copyright-infringing outputs |
| Complaint mechanism | Create a complaint mechanism for rights holders regarding copyright infringements |
| Training data disclosure | Publicly disclose a summary of training data used, including data sources and characteristics |
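The robots.txt requirement above is the most directly automatable item. A minimal sketch using Python's standard-library `urllib.robotparser` is shown below; the bot name and URLs are hypothetical, and a production crawler would fetch the live robots.txt (via `rp.read()`) rather than parse an inline example.

```python
from urllib.robotparser import RobotFileParser

# Inline robots.txt example so the snippet is self-contained;
# a real crawler would fetch https://example.com/robots.txt instead.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def may_crawl_for_training(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Return True only if robots.txt permits fetching this URL."""
    return rp.can_fetch(user_agent, url)

print(may_crawl_for_training("https://example.com/articles/post1"))  # True
print(may_crawl_for_training("https://example.com/private/data"))    # False
```

Gating every fetch through a check like this gives the training pipeline an auditable record that opt-outs were honored at collection time.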
### Penalties
| Violation | Fine |
| --- | --- |
| AI Act Article 53 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |
## Systemic Risk Assessment (Article 55)
Article 55 applies only to the most powerful GPAI models: those trained with more than 10²⁵ FLOPs of cumulative compute, or otherwise designated by the Commission. The Safety and Security chapter operationalizes the most demanding tier of EU AI regulation, requiring state-of-the-art adversarial testing, red-teaming, and cybersecurity measures for models that pose systemic risks to the EU.
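Whether a model crosses the 10²⁵ FLOPs threshold can be roughly estimated with the common FLOPs ≈ 6·N·D approximation (N parameters, D training tokens). The model sizes below are illustrative assumptions, not real models, and an actual compliance assessment would count all cumulative training compute, not just a single pre-training run.

```python
# AI Act presumption threshold for systemic-risk GPAI models.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: FLOPs ~ 6 * N * D."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOPs threshold."""
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# 70B parameters on 15T tokens: ~6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(70e9, 15e12))   # False
# 400B parameters on 15T tokens: ~3.6e25 FLOPs, above the threshold.
print(presumed_systemic_risk(400e9, 15e12))  # True
```

The back-of-envelope check shows why the threshold captures only frontier-scale training runs today, while the Commission's designation power covers models that pose systemic risk by other measures.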
### Requirements
| Requirement | Details |
| --- | --- |
| Systemic risk assessment | Assess and mitigate systemic risks arising from the GPAI model, including risks to health, safety, fundamental rights, society, and democracy |
| Adversarial testing | Conduct adversarial testing and red-teaming to identify dangerous capabilities |
| Cybersecurity measures | Implement cybersecurity controls appropriate to the model's risk level |
| Safety practices | Apply state-of-the-art safety practices for high-capability model development and deployment |
| Ongoing monitoring | Continuously monitor for emerging systemic risks post-deployment |
### Penalties
| Violation | Fine |
| --- | --- |
| AI Act Article 55 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher) |