Hiroshima AI Process – Principles & Code of Conduct
Obligations Covered
Risk Assessment, Incident Reporting, Transparency & Disclosure, Human Oversight, Record-Keeping & Documentation
Risk Management Lifecycle (Action 1)
Action 1 is the first and foundational commitment of the Code: it requires risk identification and mitigation throughout the entire AI development and deployment lifecycle. The US Executive Order on AI and EU AI Act implementation guidance both reference it as a convergent international baseline.
Requirements
| Requirement | Details |
|---|---|
| Lifecycle risk identification | Identify, evaluate, and mitigate risks prior to and throughout development and deployment |
| Pre-deployment assessment | Conduct risk assessments before release of significant new versions |
| Proportionate controls | Apply measures commensurate to the risk level identified |
| Ongoing monitoring | Continuously assess risks as systems evolve and contexts of use change |
Penalties
| Violation | Consequence |
|---|---|
| Non-compliance | Voluntary; no binding enforcement mechanism |
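The Code does not prescribe tooling, but the lifecycle requirements above map naturally onto a maintained risk register. Below is a minimal Python sketch of one possible shape for such a register; every name in it (`RiskEntry`, `needs_reassessment`, the 90-day review interval) is an illustrative assumption, not a term from the Code.

```python
# Hypothetical sketch of a lifecycle risk register; not an official schema.
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum


class Stage(Enum):
    DESIGN = "design"
    TRAINING = "training"
    EVALUATION = "evaluation"
    DEPLOYMENT = "deployment"
    POST_DEPLOYMENT = "post-deployment"


@dataclass
class RiskEntry:
    identifier: str
    description: str
    stage: Stage
    severity: int                   # 1 (low) to 5 (critical) on an internal scale
    mitigation: str
    last_reviewed: date
    review_interval_days: int = 90  # assumed cadence; the Code sets none


def needs_reassessment(risk: RiskEntry, today: date) -> bool:
    """Flag entries whose periodic review is overdue, supporting the
    'ongoing monitoring' requirement as contexts of use change."""
    return today - risk.last_reviewed > timedelta(days=risk.review_interval_days)
```

A register like this also gives pre-deployment assessments a concrete artifact to review before each significant new release.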
Incident and Vulnerability Management (Action 2)
Action 2 requires post-deployment monitoring for vulnerabilities, incidents, and misuse patterns. In effect it sets a voluntary incident-response standard for foundation model developers, one that national regulators point to as a reference expectation.
Requirements
| Requirement | Details |
|---|---|
| Vulnerability identification | Identify and mitigate security vulnerabilities after deployment |
| Incident response | Address AI incidents promptly; maintain response processes |
| Misuse pattern monitoring | Monitor for patterns of misuse and take corrective action |
| Post-market surveillance | Treat post-deployment oversight as an ongoing obligation |
Penalties
| Violation | Consequence |
|---|---|
| Non-compliance | Voluntary; no binding enforcement mechanism |
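Since the Code leaves implementation open, the following sketch shows one plausible shape for a post-deployment incident log with a naive misuse-pattern check. The `Incident` class, its fields, and the threshold of three are illustrative assumptions.

```python
# Hypothetical sketch of a post-deployment incident log; thresholds are examples.
from collections import Counter
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Incident:
    reported_at: datetime
    category: str          # e.g. "vulnerability", "misuse", "safety"
    description: str
    resolved: bool = False


def recurring_categories(incidents: list[Incident], threshold: int = 3) -> list[str]:
    """Return categories of unresolved incidents seen at least `threshold`
    times, a crude stand-in for 'misuse pattern monitoring'."""
    counts = Counter(i.category for i in incidents if not i.resolved)
    return [category for category, n in counts.items() if n >= threshold]
```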
Transparency Reporting (Actions 3–4)
Action 3 requires transparency reports for all significant new releases of advanced AI, covering safety evaluations and societal risk assessments. Action 4 adds a cross-industry information-sharing norm: organizations should responsibly share safety findings, dangerous-capability evaluations, and attempted safeguard circumventions across the sector.
Requirements
| Requirement | Details |
|---|---|
| Transparency reports | Publish meaningful transparency reports for all significant new releases of advanced AI |
| Safety evaluation disclosure | Include details of safety, security, and societal risk evaluations |
| Human rights risk disclosure | Address potential impacts on human rights in reporting |
| Privacy policy disclosure | Disclose and keep current privacy policies covering personal data, user prompts, and outputs |
| AI interaction labeling | Implement labeling or disclaimers so users know they are interacting with AI |
| Information sharing | Responsibly share evaluation reports, security risks, dangerous capabilities, and circumvention attempts across the sector |
Penalties
| Violation | Consequence |
|---|---|
| Non-compliance | Voluntary; no binding enforcement mechanism |
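To make the reporting requirements concrete, here is a hedged sketch of a machine-readable transparency report. The field names are one plausible reading of Actions 3–4, not an official schema, and all values are placeholders.

```python
# Hypothetical transparency-report structure; field names are illustrative.
import json
from dataclasses import asdict, dataclass


@dataclass
class TransparencyReport:
    model_name: str
    version: str
    safety_evaluations: list[str]   # summaries of safety/security evaluations
    societal_risk_notes: list[str]  # including human rights impact notes
    privacy_policy_url: str
    ai_interaction_label: str       # user-facing "you are talking to an AI" text


report = TransparencyReport(
    model_name="example-model",     # placeholder values throughout
    version="2.0",
    safety_evaluations=["red-team summary", "dangerous-capability evaluation"],
    societal_risk_notes=["bias assessment", "human rights impact review"],
    privacy_policy_url="https://example.com/privacy",
    ai_interaction_label="You are interacting with an AI system.",
)

print(json.dumps(asdict(report), indent=2))  # the publishable artifact
```

Publishing a structured report per release also makes the Action 4 sharing norm easier to satisfy, since the same artifact can circulate across the sector.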
AI Governance and Accountability (Action 5)
Action 5 requires organizations to establish and disclose internal AI governance policies and to build the accountability structures that implement them, following a risk-based approach across the AI lifecycle.
Requirements
| Requirement | Details |
|---|---|
| AI governance policies | Establish and disclose internal AI governance policies |
| Accountability structures | Create organizational mechanisms to implement governance according to a risk-based approach |
| Lifecycle accountability | Maintain accountability processes to evaluate and mitigate risks throughout the AI lifecycle |
| Self-assessment | Conduct self-assessments against stated policies and commitments |
Penalties
| Violation | Consequence |
|---|---|
| Non-compliance | Voluntary; no binding enforcement mechanism |
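One lightweight way to operationalize the self-assessment requirement is a checklist evaluated against stated commitments. The items below are invented examples, not the Code's wording.

```python
# Hypothetical governance self-assessment; every item is an invented example.
POLICY_CHECKLIST = {
    "governance_policy_published": True,
    "risk_based_escalation_path_defined": True,
    "lifecycle_risk_reviews_scheduled": False,
    "annual_self_assessment_completed": False,
}


def self_assessment_gaps(checklist: dict[str, bool]) -> list[str]:
    """Return the stated commitments not yet met, for internal reporting."""
    return [item for item, met in checklist.items() if not met]


print(self_assessment_gaps(POLICY_CHECKLIST))
# ['lifecycle_risk_reviews_scheduled', 'annual_self_assessment_completed']
```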
Content Authentication and Provenance (Action 7)
The Code of Conduct is among the first major international frameworks to call for watermarking and provenance mechanisms for AI-generated content, anticipating what is now becoming a mandatory requirement under the EU AI Act and similar national laws. The obligation applies only where technically feasible, which makes it a flexible but politically significant benchmark.
Requirements
| Requirement | Details |
|---|---|
| Content authentication | Develop and deploy reliable content authentication mechanisms where technically feasible |
| Provenance mechanisms | Implement provenance tracking to trace origin of AI-generated content |
| Watermarking | Apply watermarking or equivalent techniques to enable identification of AI-generated content |
| Technical documentation | Maintain technical documentation supporting content authentication capabilities |
Penalties
| Violation | Consequence |
|---|---|
| Non-compliance | Voluntary; no binding enforcement mechanism |
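The Code names no particular mechanism; in practice, organizations reach for standards such as C2PA manifests or model-level watermarking. Purely as an illustration of the provenance idea, the sketch below signs a record tying content to the model that produced it. The record format is our own and the key handling is deliberately naive.

```python
# Hypothetical provenance record; NOT the Code's mechanism or a real standard.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-managed-secret"  # placeholder; use managed key storage


def provenance_record(content: bytes, model_id: str) -> dict:
    """Return a signed record linking content to its generating model.
    Verifiers recompute the HMAC over the record minus its signature."""
    record = {
        "model_id": model_id,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record
```

Metadata-based records like this are easy to strip from content, which is why the Code pairs them with watermarking that survives inside the content itself.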
Security Controls (Action 6)
Action 6 specifically addresses physical security, cybersecurity, and insider-threat controls, including protection of model weights, algorithms, servers, and datasets. This cybersecurity-of-AI-systems obligation has no direct 1:1 match in the current obligation ontology; it is mapped to Risk Assessment as the closest fit, and a dedicated security obligation may be worth adding.
Requirements
| Requirement | Details |
|---|---|
| Physical security | Invest in physical security controls across the AI lifecycle |
| Cybersecurity controls | Implement cybersecurity controls including protection of model weights and algorithms |
| Insider threat safeguards | Establish controls against insider threats targeting AI systems |
| Infrastructure security | Secure servers, datasets, and computational infrastructure |
Penalties
| Violation | Consequence |
|---|---|
| Non-compliance | Voluntary; no binding enforcement mechanism |
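Protecting model weights spans many controls, but one small, codeable piece is integrity verification against digests recorded at release time. In the sketch below, the file path and digest are placeholders, and the manifest is an assumed convention rather than anything the Code specifies.

```python
# Hypothetical weight-integrity check; paths and digests are placeholders.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = {
    # Recorded at release time and stored separately from the weights.
    "weights/model-00001.safetensors": "<digest recorded at release>",
}


def verify_weights(root: Path) -> list[str]:
    """Return relative paths whose on-disk digest no longer matches the
    recorded value, signalling possible tampering or corruption."""
    tampered = []
    for rel_path, expected in EXPECTED_SHA256.items():
        digest = hashlib.sha256((root / rel_path).read_bytes()).hexdigest()
        if digest != expected:
            tampered.append(rel_path)
    return tampered
```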