Does the General-Purpose AI Code of Practice (GPAI CoP) require Risk Assessment?

European Union • enforcing

Yes — 1 provision

Requirements at a glance

This regulation imposes 5 specific requirements for Risk Assessment across 1 provision:

Systemic Risk Assessment (Article 55)

Obligation: Risk Assessment (enforcing)
Effective: Aug 2, 2025
Risk tier: High
Scope: Providers (high-impact)
Applies only to the most powerful GPAI models (above 10²⁵ FLOPs training compute, or Commission-designated). The Safety and Security chapter operationalizes the most demanding tier of EU AI regulation — requiring state-of-the-art adversarial testing, red-teaming, and cybersecurity measures for models that pose systemic risks to the EU.
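The compute threshold above is the trigger for this provision: a model whose cumulative training compute exceeds 10²⁵ FLOPs is presumed to pose systemic risk. A minimal sketch of that check, using the widely cited 6·N·D approximation (roughly 6 FLOPs per parameter per training token) as an assumption; the model sizes below are illustrative, not Commission designations:

```python
# Illustrative check against the EU AI Act's 10^25 FLOP systemic-risk
# presumption threshold, using the common 6*N*D training-compute heuristic.
# This heuristic and the example figures are assumptions for illustration.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def is_presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimate_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# A 70B-parameter model trained on 15T tokens: 6 * 70e9 * 15e12 = 6.3e24, below threshold
print(is_presumed_systemic_risk(70e9, 15e12))   # False
# A 1T-parameter model trained on 15T tokens: 6 * 1e12 * 15e12 = 9e25, above threshold
print(is_presumed_systemic_risk(1e12, 15e12))   # True
```

Note that the threshold is a presumption, not a ceiling: the Commission can also designate a model below 10²⁵ FLOPs as posing systemic risk.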

Requirements

Requirement | Details
Systemic risk assessment | Assess and mitigate systemic risks arising from the GPAI model, including risks to health, safety, fundamental rights, society, and democracy
Adversarial testing | Conduct adversarial testing and red-teaming to identify dangerous capabilities
Cybersecurity measures | Implement cybersecurity controls appropriate to the model's risk level
Safety practices | Apply state-of-the-art safety practices for high-capability model development and deployment
Ongoing monitoring | Continuously monitor for emerging systemic risks post-deployment

Penalties

Violation | Fine
AI Act Article 55 infringement | Up to €15 million or 3% of worldwide annual turnover (whichever is higher)
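The "whichever is higher" rule means the applicable fine ceiling depends on turnover. A small sketch of that arithmetic (the function name and turnover figures are illustrative):

```python
# Illustrative computation of the Article 55 fine ceiling:
# the higher of EUR 15 million or 3% of worldwide annual turnover.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable maximum fine in euros."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)


# Turnover of EUR 200M: 3% is EUR 6M, so the EUR 15M floor applies.
print(max_fine_eur(200_000_000))    # 15000000.0
# Turnover of EUR 2B: 3% is EUR 60M, which exceeds the floor.
print(max_fine_eur(2_000_000_000))  # 60000000.0
```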