Does the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225) require Risk Assessment?

Council of Europe • enacted

Yes — 1 provision

Requirements at a glance

This convention imposes six specific Risk Assessment requirements under a single provision:

Risk and Impact Management (Article 16)

Obligation: Risk Assessment (pending)
Effective: (date unavailable)
Risk tier: all
Scope: providers, deployers
Tags: high-impact, cross-domain, upcoming
Article 16 goes further than most voluntary frameworks by requiring States to assess whether specific AI uses should be subject to moratoria or outright bans — a tool available under binding international law that has no equivalent in current national AI regulations.

Requirements

Lifecycle risk identification: Identify, assess, prevent, and mitigate risks to human rights, democracy, and the rule of law across the AI lifecycle.
Proportionality: Measures must be proportionate to the severity and probability of potential impacts.
Graduated approach: Risk management must be differentiated according to context and intended use.
Pre-deployment testing: AI systems must be tested before first use and whenever they are significantly modified.
Iterative monitoring: Risk assessment must be applied continuously throughout the AI lifecycle.
Moratoria assessment: States must assess the need for moratoria or bans on AI uses incompatible with human rights, democracy, or the rule of law.

Penalties

Violation: Non-compliance
Enforcement: Binding treaty; enforced through domestic implementation and Conference of the Parties oversight.