The first and foundational action of the Code requires risk identification and mitigation throughout the entire AI development and deployment lifecycle. It is referenced by the US Executive Order on AI and by EU AI Act implementation guidance as a convergent international baseline.
Requirements

| Requirement | Details |
| --- | --- |
| Lifecycle risk identification | Identify, evaluate, and mitigate risks prior to and throughout development and deployment |
| Pre-deployment assessment | Conduct risk assessments before releasing significant new versions |
| Proportionate controls | Apply measures commensurate with the identified risk level |
| Ongoing monitoring | Continuously assess risks as systems evolve and contexts of use change |
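The requirements above describe a gating pattern: risks are identified, controls are applied in proportion to severity, and release is blocked until material risks are mitigated. A minimal sketch of that pattern follows; the class and field names (`ReleaseGate`, `RiskLevel`, `ready_for_release`) are illustrative inventions, not anything prescribed by the Code.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    description: str
    level: RiskLevel
    mitigated: bool = False

@dataclass
class ReleaseGate:
    """Pre-deployment check: every identified risk at or above the
    threshold must be mitigated before a significant new version ships."""
    threshold: RiskLevel = RiskLevel.MEDIUM
    risks: list[Risk] = field(default_factory=list)

    def identify(self, description: str, level: RiskLevel) -> Risk:
        risk = Risk(description, level)
        self.risks.append(risk)
        return risk

    def ready_for_release(self) -> bool:
        # Proportionate controls: only risks meeting the threshold block release.
        return all(
            r.mitigated
            for r in self.risks
            if r.level.value >= self.threshold.value
        )
```

An unmitigated HIGH risk makes `ready_for_release()` return `False`; a LOW risk below the threshold does not block release, reflecting the proportionality requirement.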
Action 6 specifically addresses physical security, cybersecurity, and insider-threat controls, including protection of model weights, algorithms, servers, and datasets. This cybersecurity-of-AI-systems obligation has no direct 1:1 match in the current obligation ontology and is mapped to risk-assessment as the closest fit; consider adding a dedicated security obligation.
Requirements

| Requirement | Details |
| --- | --- |
| Physical security | Invest in physical security controls across the AI lifecycle |
| Cybersecurity controls | Implement cybersecurity controls including protection of model weights and algorithms |
| Insider threat safeguards | Establish controls against insider threats targeting AI systems |
| Infrastructure security | Secure servers, datasets, and computational infrastructure |