Hiroshima AI Process – Principles & Code of Conduct

Jurisdiction: G7 (voluntary)
Effective: Oct 30, 2023
Authority: G7 Leaders
Official text verified Mar 26, 2026

Obligations Covered

- Risk Assessment
- Incident Reporting
- Transparency & Disclosure
- Human Oversight
- Record-Keeping & Documentation

Risk Management Lifecycle (Action 1)

Obligation: Risk Assessment (enforcing)
Effective: Oct 30, 2023
Risk tier: all
Scope: providers (high-impact, cross-domain)
The first and foundational action of the Code: it requires risk identification and mitigation throughout the entire AI development and deployment lifecycle. The US Executive Order on AI and EU AI Act implementation guidance both reference it as a convergent international baseline.

Requirements

| Requirement | Details |
| --- | --- |
| Lifecycle risk identification | Identify, evaluate, and mitigate risks prior to and throughout development and deployment |
| Pre-deployment assessment | Conduct risk assessments before release of significant new versions |
| Proportionate controls | Apply measures commensurate with the identified risk level |
| Ongoing monitoring | Continuously assess risks as systems evolve and contexts of use change |
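
The lifecycle requirements above could be tracked in something as simple as a risk register. A minimal sketch, assuming a hypothetical schema (all names and severity thresholds here are illustrative, not taken from the Code):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One identified risk, tracked across the AI lifecycle (hypothetical schema)."""
    description: str
    severity: str                 # "low" | "medium" | "high"
    phase: str                    # "development" | "pre-deployment" | "post-deployment"
    mitigations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

def controls_proportionate(entry: RiskEntry) -> bool:
    """Toy 'proportionate controls' check: higher-severity risks
    require more documented mitigations."""
    if entry.severity == "high":
        return len(entry.mitigations) >= 2
    if entry.severity == "medium":
        return len(entry.mitigations) >= 1
    return True

register = [
    RiskEntry("prompt-injection abuse", "high", "post-deployment",
              mitigations=["input filtering", "abuse monitoring"]),
    RiskEntry("training-data leakage", "medium", "development",
              mitigations=["deduplication"]),
]
# Risks whose controls fall short of the toy proportionality rule
unmitigated = [r.description for r in register if not controls_proportionate(r)]
```

The severity-to-mitigation mapping is the design decision a real implementation would have to justify; the Code itself only requires that controls be proportionate, not how proportionality is measured.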

Penalties

| Violation | Penalty |
| --- | --- |
| Non-compliance | Voluntary; no binding enforcement mechanism |

Incident and Vulnerability Management (Action 2)

Obligation: Incident Reporting (enforcing)
Effective: Oct 30, 2023
Risk tier: all
Scope: providers (cross-domain)
Requires post-deployment monitoring for vulnerabilities, incidents, and misuse patterns — effectively a voluntary incident response standard for foundation model developers that national regulators point to as a reference expectation.

Requirements

| Requirement | Details |
| --- | --- |
| Vulnerability identification | Identify and mitigate security vulnerabilities after deployment |
| Incident response | Address AI incidents promptly; maintain response processes |
| Misuse pattern monitoring | Monitor for patterns of misuse and take corrective action |
| Post-market surveillance | Treat post-deployment oversight as an ongoing obligation |

Penalties

| Violation | Penalty |
| --- | --- |
| Non-compliance | Voluntary; no binding enforcement mechanism |

Transparency Reporting (Actions 3–4)

Obligation: Transparency (enforcing)
Effective: Oct 30, 2023
Risk tier: all
Scope: providers (high-impact, cross-domain)
Requires transparency reports for all significant new releases of advanced AI, covering safety evaluations and societal risk assessments. Action 4 adds a cross-industry information-sharing norm — organizations should share safety findings, dangerous capability evaluations, and attempted safeguard circumventions responsibly across the sector.

Requirements

| Requirement | Details |
| --- | --- |
| Transparency reports | Publish meaningful transparency reports for all significant new releases of advanced AI |
| Safety evaluation disclosure | Include details of safety, security, and societal risk evaluations |
| Human rights risk disclosure | Address potential impacts on human rights in reporting |
| Privacy policy disclosure | Disclose and keep current privacy policies covering personal data, user prompts, and outputs |
| AI interaction labeling | Implement labeling or disclaimers so users know they are interacting with AI |
| Information sharing | Responsibly share evaluation reports, security risks, dangerous capabilities, and circumvention attempts across the sector |
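
A transparency report covering the disclosure requirements above is essentially a document with mandatory sections, which makes completeness checkable. A minimal sketch; the section names are hypothetical labels derived from the table, not terms defined by the Code:

```python
# Hypothetical required sections, mirroring the disclosure rows above.
REQUIRED_SECTIONS = {
    "safety_evaluations",
    "security_evaluations",
    "societal_risk_assessment",
    "human_rights_impacts",
    "privacy_policy",
}

def missing_sections(report: dict) -> set[str]:
    """Which mandatory sections a draft report has not yet covered."""
    return REQUIRED_SECTIONS - report.keys()

draft = {
    "model": "example-model-v2",
    "safety_evaluations": ["red-team summary"],
    "security_evaluations": ["penetration-test summary"],
    "societal_risk_assessment": "summary of assessed societal risks",
    "privacy_policy": "https://example.com/privacy",
}
gaps = missing_sections(draft)   # human-rights impacts not yet covered
```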

Penalties

| Violation | Penalty |
| --- | --- |
| Non-compliance | Voluntary; no binding enforcement mechanism |

AI Governance and Accountability (Action 5)

Obligation: Human Oversight (enforcing)
Effective: Oct 30, 2023
Risk tier: all
Scope: providers

Requirements

| Requirement | Details |
| --- | --- |
| AI governance policies | Establish and disclose internal AI governance policies |
| Accountability structures | Create organizational mechanisms to implement governance according to a risk-based approach |
| Lifecycle accountability | Maintain accountability processes to evaluate and mitigate risks throughout the AI lifecycle |
| Self-assessment | Conduct self-assessments against stated policies and commitments |

Penalties

| Violation | Penalty |
| --- | --- |
| Non-compliance | Voluntary; no binding enforcement mechanism |

Content Authentication and Provenance (Action 7)

Obligation: Record-Keeping (enforcing)
Effective: Oct 30, 2023
Risk tier: all
Scope: providers (high-impact, cross-domain)
The Code of Conduct is among the first major international frameworks to call for watermarking and provenance mechanisms for AI-generated content — anticipating what is now becoming a mandatory requirement under the EU AI Act and similar national laws. Applies where technically feasible, making it a flexible but politically significant benchmark.

Requirements

| Requirement | Details |
| --- | --- |
| Content authentication | Develop and deploy reliable content authentication mechanisms where technically feasible |
| Provenance mechanisms | Implement provenance tracking to trace the origin of AI-generated content |
| Watermarking | Apply watermarking or equivalent techniques to enable identification of AI-generated content |
| Technical documentation | Maintain technical documentation supporting content authentication capabilities |
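
At its simplest, a provenance mechanism binds a piece of content to a claim about how it was generated. A toy sketch of that idea using only a content hash; real deployments would use cryptographically signed standards such as C2PA, and all names here are illustrative:

```python
import hashlib

def provenance_manifest(content: bytes, generator: str) -> dict:
    """Toy provenance record: binds a content hash to its (claimed) generator.
    Unsigned, so it only illustrates the binding, not its authenticity."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
    }

def matches(content: bytes, manifest: dict) -> bool:
    """True iff the content is byte-identical to what the manifest describes."""
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]

m = provenance_manifest(b"AI-generated caption", "example-model")
```

Any edit to the content breaks the match, which is the property that makes the record useful for identifying AI-generated material after the fact.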

Penalties

| Violation | Penalty |
| --- | --- |
| Non-compliance | Voluntary; no binding enforcement mechanism |

Security Controls (Action 6)

Obligation: Risk Assessment (enforcing)
Effective: Oct 30, 2023
Risk tier: all
Scope: providers (cross-domain)
Action 6 specifically addresses physical security, cybersecurity, and insider threat controls — including protection of model weights, algorithms, servers, and datasets. This cybersecurity-of-AI-systems obligation has no direct 1:1 match in the current obligation ontology; mapped to risk-assessment as the closest fit. Consider adding a dedicated security obligation.

Requirements

| Requirement | Details |
| --- | --- |
| Physical security | Invest in physical security controls across the AI lifecycle |
| Cybersecurity controls | Implement cybersecurity controls including protection of model weights and algorithms |
| Insider threat safeguards | Establish controls against insider threats targeting AI systems |
| Infrastructure security | Secure servers, datasets, and computational infrastructure |
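
One concrete piece of "protection of model weights" is integrity verification: recording a checksum at release time and re-checking it before the weights are loaded, so tampering or corruption is detected. A minimal sketch (the file here is a stand-in; real weight files are large, hence the streamed read):

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Stream a (potentially large) weights file through SHA-256."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def verify_weights(path: Path, expected: str) -> bool:
    """Detect tampering by comparing against a recorded checksum."""
    return file_sha256(path) == expected

# Demo: stand in for a released weights file
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"fake model weights")
    weights_path = Path(tmp.name)
recorded = file_sha256(weights_path)  # checksum published at release time
```

Checksums catch accidental or unauthorized modification; defending against an insider who can also rewrite the recorded checksum additionally requires signatures or an out-of-band record, which is beyond this sketch.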

Penalties

| Violation | Penalty |
| --- | --- |
| Non-compliance | Voluntary; no binding enforcement mechanism |