Does the General-Purpose AI Code of Practice (GPAI CoP) require Risk Assessment?
European Union • enforcing
Yes — 1 provision
Requirements at a glance
The Code of Practice imposes 5 specific requirements for Risk Assessment across 1 provision:
- Systemic risk assessment — Assess and mitigate systemic risks arising from the GPAI model, including risks to health, safety, fundamental rights, society, and democracy
- Adversarial testing — Conduct adversarial testing and red-teaming to identify dangerous capabilities
- Cybersecurity measures — Implement cybersecurity controls appropriate to the model's risk level
- Safety practices — Apply state-of-the-art safety practices for high-capability model development and deployment
- Ongoing monitoring — Continuously monitor for emerging systemic risks post-deployment
Systemic Risk Assessment (Article 55)
Applies only to the most powerful GPAI models (cumulative training compute above 10²⁵ FLOPs, or models designated by the Commission). The Safety and Security chapter operationalizes the most demanding tier of EU AI regulation, requiring state-of-the-art adversarial testing, red-teaming, and cybersecurity measures for models that pose systemic risks to the EU.
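To make the compute threshold concrete, the sketch below estimates training FLOPs with the widely used rule of thumb of roughly 6 FLOPs per parameter per training token (an approximation from the scaling-law literature, not part of the AI Act) and compares the result against the 10²⁵ FLOP presumption threshold. The function names are hypothetical illustrations, not part of any official tooling.

```python
# Illustrative sketch only: estimating whether a model's training compute
# crosses the AI Act's 10**25 FLOP systemic-risk presumption threshold.
# The "6 * parameters * tokens" estimate is a common rule of thumb from the
# scaling-law literature; the AI Act itself does not prescribe this formula.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for GPAI models


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * parameters * training_tokens


def is_presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a 70B-parameter model trained on 15T tokens lands at roughly
# 6.3e24 FLOPs, just under the 1e25 presumption threshold.
below = is_presumed_systemic_risk(70e9, 15e12)
```

Note that the threshold is only a presumption: the Commission can also designate models below it as posing systemic risk.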
Requirements
| Requirement | Details |
|---|---|
| Systemic risk assessment | Assess and mitigate systemic risks arising from the GPAI model, including risks to health, safety, fundamental rights, society, and democracy |
| Adversarial testing | Conduct adversarial testing and red-teaming to identify dangerous capabilities |
| Cybersecurity measures | Implement cybersecurity controls appropriate to the model's risk level |
| Safety practices | Apply state-of-the-art safety practices for high-capability model development and deployment |
| Ongoing monitoring | Continuously monitor for emerging systemic risks post-deployment |
Penalties
| Violation | Fine |
|---|---|
| AI Act Article 55 infringement | Up to €15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher (AI Act Article 101) |
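The fine ceiling above is simply the higher of a fixed amount and a turnover percentage; a minimal sketch of that arithmetic (the function name is illustrative):

```python
# Illustrative sketch of the Article 101 fine ceiling for GPAI providers:
# the higher of EUR 15 million or 3% of worldwide annual turnover.

def max_gpai_fine(worldwide_annual_turnover_eur: float) -> float:
    """Fine ceiling: max(EUR 15M, 3% of worldwide annual turnover)."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)


# A provider with EUR 2 billion turnover: 3% is roughly EUR 60 million,
# which exceeds the EUR 15 million floor, so 3% applies.
ceiling = max_gpai_fine(2e9)
```

For smaller providers whose 3% figure falls below €15 million, the fixed €15 million amount is the applicable ceiling.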