Framework Convention on AI, Human Rights, Democracy and Rule of Law (CETS 225)

Jurisdiction:
Council of Europe
Status:
enacted
Effective:
Authority:
Council of Europe Secretary General
Official text verified Mar 26, 2026

Obligations Covered

Transparency & Disclosure · Risk Assessment · Record-Keeping & Documentation · Human Oversight · Bias & Discrimination Prevention

Transparency and Notification (Article 8)

Obligation:
Transparency
Status:
pending
Effective:
Not yet in force
Risk tier:
all
Scope:
providers, deployers
Tags: high-impact, cross-domain, upcoming
The first binding international treaty to require notification when a person is interacting with an AI system rather than a human. It applies across all sectors in every ratifying state, and its signatories include the US, the UK, and the EU, creating a genuinely transatlantic baseline for AI disclosure.

Requirements

Requirement | Details
Transparency of AI use | Adequate transparency requirements must be in place across the AI system lifecycle
Human-vs-AI notification | Persons interacting with AI systems must be notified they are interacting with AI, not a human, where appropriate in context
Oversight transparency | Transparency requirements must be tailored to specific contexts and risks

Penalties

Violation | Penalty
Non-compliance | Binding treaty: enforcement through domestic implementation; no direct supranational fines

Risk and Impact Management (Article 16)

Obligation:
Risk Assessment
Status:
pending
Effective:
Not yet in force
Risk tier:
all
Scope:
providers, deployers
Tags: high-impact, cross-domain, upcoming
Article 16 goes further than most voluntary frameworks by requiring States to assess whether specific AI uses should be subject to moratoria or outright bans, a mechanism grounded in binding international law that few national AI regulations replicate.

Requirements

Requirement | Details
Lifecycle risk identification | Identify, assess, prevent, and mitigate risks to human rights, democracy, and rule of law across the AI lifecycle
Proportionality | Measures must be proportionate to the severity and probability of potential impacts
Graduated approach | Risk management must be differentiated based on context and intended use
Pre-deployment testing | AI systems must be tested before first use and when significantly modified
Iterative monitoring | Risk assessment must be applied continuously throughout the AI lifecycle
Moratoria assessment | States must assess the need for moratoria or bans on AI uses incompatible with human rights, democracy, or rule of law
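The lifecycle requirements above can be sketched as a simple risk register. Everything here is an assumption for illustration: the Convention prescribes no scoring scheme, so the severity-times-probability score, the phase names, and the moratorium threshold are hypothetical choices a provider or deployer might make.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch only: the Convention does not prescribe a scoring
# scheme; severity * probability is one common proportionality heuristic.

class Phase(Enum):
    DESIGN = "design"
    PRE_DEPLOYMENT = "pre-deployment"
    OPERATION = "operation"
    MODIFICATION = "significant modification"

@dataclass
class Risk:
    description: str
    severity: int     # 1 (minor) .. 5 (severe impact on rights)
    probability: int  # 1 (remote) .. 5 (near certain)

    @property
    def score(self) -> int:
        return self.severity * self.probability

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)
    assessed_phases: list[Phase] = field(default_factory=list)

    def assess(self, phase: Phase, new_risks: list[Risk]) -> None:
        """Record an assessment; Article 16 calls for this at every phase."""
        self.assessed_phases.append(phase)
        self.risks.extend(new_risks)

    def needs_moratorium_review(self, threshold: int = 20) -> bool:
        """Flag uses whose residual risk may warrant a ban or moratorium review."""
        return any(r.score >= threshold for r in self.risks)
```

Re-running `assess` on each phase keeps the register iterative, matching the continuous-monitoring row above.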

Penalties

Violation | Penalty
Non-compliance | Binding treaty: enforcement through domestic implementation and Conference of the Parties oversight

Documentation and Record-Keeping (Articles 14–16)

Obligation:
Record Keeping
Status:
pending
Effective:
Not yet in force
Risk tier:
all
Scope:
providers, deployers

Requirements

Requirement | Details
Risk documentation | Document risks, actual and potential impacts, and the risk management approach throughout the AI lifecycle
Contestability documentation | Maintain documentation that enables affected persons to challenge AI system outputs
Procedural records | Keep records sufficient to support fair procedures and appeal rights for affected persons
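One way to keep records that satisfy both the risk-documentation and contestability rows above is to serialize each consequential decision along with the inputs an affected person could challenge. The schema below is a hypothetical sketch; Articles 14–16 require adequate documentation but prescribe no format, and every field name here is an assumption.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record layout; the Convention requires documentation
# sufficient for challenge and appeal but prescribes no schema.

@dataclass
class DecisionRecord:
    system_id: str
    decision: str
    key_inputs: dict    # factors the affected person may contest
    timestamp: str
    appeal_contact: str  # where to lodge a challenge

def log_decision(system_id: str, decision: str, key_inputs: dict,
                 appeal_contact: str) -> str:
    """Serialize a decision record so an affected person can challenge it."""
    record = DecisionRecord(
        system_id=system_id,
        decision=decision,
        key_inputs=key_inputs,
        timestamp=datetime.now(timezone.utc).isoformat(),
        appeal_contact=appeal_contact,
    )
    return json.dumps(asdict(record))
```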

Penalties

Violation | Penalty
Non-compliance | Binding treaty: enforcement through domestic implementation

Remedies and Procedural Safeguards (Articles 14–15)

Obligation:
Human Oversight
Status:
pending
Effective:
Not yet in force
Risk tier:
all
Scope:
providers, deployers
Tags: high-impact, cross-domain, upcoming
CETS 225 is the first international treaty to establish a right to contest AI decisions. Articles 14–15 create binding remedies and procedural safeguards, including appeal rights and notification, that States must embed in domestic law, going beyond existing voluntary frameworks on human oversight.

Requirements

Requirement | Details
Access to redress | Ensure effective access to remedies for persons adversely affected by AI system decisions
Contestability | Enable persons to contest AI-driven outcomes through fair mechanisms
Notification of affected persons | Notify individuals subject to AI decisions that affect their rights
Fair procedures | Ensure fair procedural safeguards, including meaningful appeal rights
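The contestability and fair-procedure requirements above imply an ordered appeal workflow. A minimal sketch follows; the stage names and allowed transitions are assumptions for illustration, not terms drawn from Articles 14–15.

```python
from enum import Enum

# Sketch of a contestation workflow; stage names are assumptions,
# not terms from the Convention.

class AppealStage(Enum):
    NOTIFIED = "affected person notified"
    CONTESTED = "outcome contested"
    HUMAN_REVIEW = "under human review"
    RESOLVED = "resolved"

# Each stage may only advance to the stages listed here, so no appeal
# can skip notification or human review.
ALLOWED = {
    AppealStage.NOTIFIED: {AppealStage.CONTESTED},
    AppealStage.CONTESTED: {AppealStage.HUMAN_REVIEW},
    AppealStage.HUMAN_REVIEW: {AppealStage.RESOLVED},
    AppealStage.RESOLVED: set(),
}

def advance(current: AppealStage, target: AppealStage) -> AppealStage:
    """Move an appeal forward; disallowed jumps raise, preserving fair procedure."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move from {current.value} to {target.value}")
    return target
```

Forcing every contested outcome through `HUMAN_REVIEW` before `RESOLVED` mirrors the meaningful-appeal requirement.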

Penalties

Violation | Penalty
Non-compliance | Binding treaty: enforcement through domestic implementation and Conference of the Parties oversight

Non-Discrimination and Equality (Human Rights Framework)

Obligation:
Bias Prevention
Status:
pending
Effective:
Not yet in force
Risk tier:
all
Scope:
providers, deployers
Tags: cross-domain
The Convention's foundational human rights framework (Article 10, equality and non-discrimination) explicitly incorporates non-discrimination as a core principle, and Article 16's risk management mandate covers impacts on equality rights. Because the Convention is anchored in existing international human rights law, including the European Convention on Human Rights, it ties AI use to established ECtHR jurisprudence on discrimination.

Requirements

Requirement | Details
Non-discrimination principle | AI activities must comply with non-discrimination obligations under international human rights law
Equality risk assessment | Risk management under Article 16 must consider equality and non-discrimination impacts
Human rights compatibility | AI systems must be compatible with democratic values and human rights, including freedom from discrimination
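The equality risk assessment row above leaves metric choice to the implementer. One common screen, shown here purely as an assumption since the Convention names no metric, is the demographic parity gap between the selection rates of two groups.

```python
# Illustrative fairness screen; the Convention requires non-discrimination
# but names no specific metric. Demographic parity difference is one
# widely used screen, and the 0.2 threshold below is an assumption.

def selection_rate(outcomes: list[int]) -> float:
    """Share of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

def flags_for_review(group_a: list[int], group_b: list[int],
                     threshold: float = 0.2) -> bool:
    """True when the gap exceeds a review threshold (threshold is assumed)."""
    return parity_gap(group_a, group_b) > threshold
```

A gap above the threshold would feed into the Article 16 risk register rather than block deployment outright.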

Penalties

Violation | Penalty
Non-compliance | Binding treaty: enforcement through domestic implementation