Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225)
Obligations Covered
Transparency & Disclosure · Risk Assessment · Record-Keeping & Documentation · Human Oversight · Bias & Discrimination Prevention
Transparency and Notification (Article 8)
The first legally binding international treaty on AI, and the first to require that people be notified when they are interacting with an AI system rather than a human. It applies across sectors, primarily to activities of public authorities (Parties address private-sector risks through the Convention's principles or other appropriate measures), and has been signed by the US, UK, and EU, pointing toward a transatlantic baseline for AI disclosure once ratified.
Requirements
| Requirement | Details |
|---|---|
| Transparency of AI use | Adequate transparency requirements must be in place across the AI system lifecycle |
| Human-vs-AI notification | Persons interacting with AI systems must be notified they are interacting with AI, not a human, where appropriate in context |
| Oversight transparency | Transparency requirements must be tailored to specific contexts and risks |
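The notification duty is context-dependent; in a conversational product it might surface as a one-time disclosure at session start. A minimal sketch of that pattern, where the function name, channel list, and disclosure text are all illustrative assumptions rather than treaty language:

```python
# Hypothetical sketch of an Article 8-style AI-interaction disclosure.
# Names, channels, and message text are illustrative, not from the Convention.

def start_session(channel: str) -> list[str]:
    """Open a user session, prepending an AI disclosure where appropriate."""
    messages = []
    # Contexts where a reasonable user might assume a human counterpart.
    if channel in {"chat", "voice", "email"}:
        messages.append(
            "You are interacting with an automated AI assistant, not a human."
        )
    messages.append("How can we help you today?")
    return messages

print(start_session("chat")[0])  # → the disclosure line
```

The "where appropriate in context" qualifier in the table maps to the channel check: a context where no one could mistake the system for a human may not need the notice.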
Penalties
| Violation | Enforcement |
|---|---|
| Non-compliance | Binding treaty — enforcement through domestic implementation; no direct supranational fines |
Risk and Impact Management (Article 16)
Article 16 goes further than most voluntary frameworks: it requires States to assess whether specific AI uses should be subject to moratoria or outright bans, a binding-international-law mechanism with few parallels in national AI regulation.
Requirements
| Requirement | Details |
|---|---|
| Lifecycle risk identification | Identify, assess, prevent, and mitigate risks to human rights, democracy, and rule of law across the AI lifecycle |
| Proportionality | Measures must be proportionate to the severity and probability of potential impacts |
| Graduated approach | Risk management must be differentiated based on context and intended use |
| Pre-deployment testing | AI systems must be tested before first use and when significantly modified |
| Iterative monitoring | Risk assessment must be applied continuously throughout the AI lifecycle |
| Moratoria assessment | States must assess the need for moratoria or bans on AI uses incompatible with human rights, democracy, or rule of law |
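The proportionality and graduated-approach requirements above imply weighing severity against probability and escalating the response accordingly. A minimal sketch of such a triage, where the scoring scale, thresholds, and response labels are invented for illustration and not prescribed by the treaty:

```python
# Hypothetical graduated risk triage in the spirit of Article 16.
# The 1-5 severity scale, thresholds, and labels are invented for illustration.

def triage(severity: int, probability: float) -> str:
    """Map severity (1-5) and probability (0-1) to a graduated response."""
    score = severity * probability
    if severity == 5 and probability > 0.5:
        # Uses incompatible with human rights warrant a ban/moratorium assessment.
        return "assess ban or moratorium"
    if score >= 2.0:
        return "mitigate before deployment"
    return "monitor iteratively"

print(triage(5, 0.8))   # → assess ban or moratorium
print(triage(3, 0.9))   # score 2.7 → mitigate before deployment
print(triage(2, 0.3))   # score 0.6 → monitor iteratively
```

Rerunning the triage whenever the system is significantly modified mirrors the pre-deployment testing and iterative monitoring rows above.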
Penalties
| Violation | Enforcement |
|---|---|
| Non-compliance | Binding treaty — enforcement through domestic implementation and Conference of the Parties oversight |
Documentation and Record-Keeping (Articles 14–16)
Requirements
| Requirement | Details |
|---|---|
| Risk documentation | Document risks, actual and potential impacts, and risk management approach throughout the AI lifecycle |
| Contestability documentation | Maintain documentation that enables affected persons to challenge AI system outputs |
| Procedural records | Keep records sufficient to support fair procedures and appeal rights for affected persons |
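One way to operationalize these record-keeping duties is a structured per-decision record that captures enough context for later contestation. A minimal sketch, assuming a simple serializable record; the field names are illustrative, not prescribed by the Convention:

```python
# Hypothetical per-decision record supporting contestability and appeal rights.
# Field names are illustrative assumptions, not treaty requirements.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    system_id: str            # which AI system produced the output
    subject_id: str           # affected person (pseudonymous identifier)
    decision: str             # the output communicated to the person
    key_factors: list[str]    # inputs that materially drove the decision
    risk_notes: str           # documented risks / mitigations considered
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    system_id="credit-scoring-v2",
    subject_id="applicant-7c1f",
    decision="application declined",
    key_factors=["debt-to-income ratio", "short credit history"],
    risk_notes="model audited for disparate impact 2025-Q1",
)
print(asdict(record)["decision"])  # serializable, so it can feed an audit trail
```

Keeping `key_factors` and `risk_notes` alongside the outcome is what lets an affected person later challenge the output, per the contestability row above.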
Penalties
| Violation | Enforcement |
|---|---|
| Non-compliance | Binding treaty — enforcement through domestic implementation |
Remedies and Procedural Safeguards (Articles 14–15)
CETS 225 is the first international treaty to establish a right to contest AI decisions. Articles 14–15 create binding remedies and procedural safeguards, including appeal rights and notification, that States must embed in domestic law, going beyond existing voluntary frameworks on human oversight.
Requirements
| Requirement | Details |
|---|---|
| Access to redress | Ensure effective access to remedies for persons adversely affected by AI system decisions |
| Contestability | Enable persons to contest AI-driven outcomes through fair mechanisms |
| Notification of affected persons | Notify individuals subject to AI decisions that affect their rights |
| Fair procedures | Ensure fair procedural safeguards, including meaningful appeal rights |
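A contestation mechanism can be wired into the decision pipeline itself: an appeal reopens the decision and routes it to human review. A minimal sketch under that assumption; the store, identifiers, and status strings are hypothetical:

```python
# Hypothetical contestability hook: an appeal routes an AI decision to human
# review. The in-memory store and status labels are illustrative assumptions.

decisions = {"d-101": {"outcome": "declined", "status": "final"}}

def contest(decision_id: str, grounds: str) -> dict:
    """Reopen a decision for human review and record the grounds of appeal."""
    record = decisions[decision_id]
    record["status"] = "under human review"   # Articles 14-15-style safeguard
    record["appeal_grounds"] = grounds
    return record

result = contest("d-101", "relevant income data was missing")
print(result["status"])  # → under human review
```

Recording the grounds of appeal ties back to the procedural-records row in the documentation section: the appeal itself becomes part of the auditable trail.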
Penalties
| Violation | Enforcement |
|---|---|
| Non-compliance | Binding treaty — enforcement through domestic implementation and Conference of the Parties oversight |
Non-Discrimination and Equality (Human Rights Framework)
The Convention's principles chapter makes equality and non-discrimination an explicit obligation (Article 10), and Article 16's risk management mandate covers impacts on equality rights. Drafted within the Council of Europe system, the treaty anchors AI use in existing international non-discrimination law, including ECtHR jurisprudence for States bound by the European Convention on Human Rights.
Requirements
| Requirement | Details |
|---|---|
| Non-discrimination principle | AI activities must comply with non-discrimination obligations under international human rights law |
| Equality risk assessment | Risk management under Article 16 must consider equality and non-discrimination impacts |
| Human rights compatibility | AI systems must be compatible with democratic values and human rights, including freedom from discrimination |
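The equality risk assessment row above can borrow quantitative checks from fairness auditing, for example comparing selection rates between groups. A minimal sketch using the disparate-impact ratio; the 0.8 ("four-fifths") threshold is a common US auditing heuristic, not something the Convention prescribes:

```python
# Hypothetical disparate-impact check for an equality risk assessment.
# The 0.8 ("four-fifths") threshold is an auditing heuristic, not treaty law.

def disparate_impact_ratio(selected_a: int, total_a: int,
                           selected_b: int, total_b: int) -> float:
    """Ratio of the lower group selection rate to the higher one (0-1)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

ratio = disparate_impact_ratio(selected_a=30, total_a=100,
                               selected_b=60, total_b=100)
print(round(ratio, 2))   # → 0.5
print(ratio < 0.8)       # below the four-fifths heuristic → True, flag for review
```

A flagged ratio would feed back into the Article 16 risk documentation rather than automatically condemning the system; context and justification still matter.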
Penalties
| Violation | Enforcement |
|---|---|
| Non-compliance | Binding treaty — enforcement through domestic implementation |