Does the Law on Artificial Intelligence require Risk Assessment?

Kazakhstan • enforcing

Yes — 1 provision

Requirements at a glance

This regulation imposes 6 specific requirements for Risk Assessment across 1 provision:

Risk-Based Classification and Management

Obligation: Risk Assessment
Status: enforcing
Effective: Jan 18, 2026
Risk tier: all (tiered obligations)
Scope: developers, deployers (high-impact)
Kazakhstan's AI law is the first in Central Asia, establishing a three-tier risk framework (minimum/medium/high) that directly mirrors the EU AI Act's approach. High-risk AI systems must use the state National AI Platform for development and testing — a unique state-platform requirement not seen in Western AI laws.

Requirements

Requirement: Details

Risk classification: Owners/holders must classify AI systems by risk degree (minimum, medium, high) based on potential impact on safety, rights, freedoms, and public order.
Risk identification: Identify and analyse known and foreseeable risks across the AI system lifecycle.
Risk mitigation: Implement safety and reliability measures commensurate with risk tier.
Documentation: Maintain tier-specific documentation per lists approved by the Ministry of AI and Digital Development.
High-risk audits: High-risk AI systems are subject to enhanced scrutiny; audits implied via Ministry oversight.
National AI Platform: High-risk system development and testing must use the state National AI Platform operated by National Information Technologies JSC.
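For teams building internal compliance tooling, the tiered scheme above can be modeled as a simple lookup from risk tier to obligation set. The sketch below is purely illustrative: only the three tier names (minimum, medium, high) come from the law's text, while the specific obligation flags and their assignments per tier are assumptions for demonstration, not a reading of the statute.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Three-tier classification named in Kazakhstan's AI law."""
    MINIMUM = "minimum"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass(frozen=True)
class Obligations:
    """Hypothetical obligation flags; not taken from the statutory text."""
    risk_classification: bool
    documentation: bool
    enhanced_audit: bool
    national_ai_platform: bool  # state platform for development/testing


# Illustrative mapping only: which obligations attach to which tier
# is an assumption here, except that the National AI Platform
# requirement is described in the source as applying to high-risk systems.
TIER_OBLIGATIONS = {
    RiskTier.MINIMUM: Obligations(True, True, False, False),
    RiskTier.MEDIUM: Obligations(True, True, False, False),
    RiskTier.HIGH: Obligations(True, True, True, True),
}


def obligations_for(tier: RiskTier) -> Obligations:
    """Look up the obligation set for a given risk tier."""
    return TIER_OBLIGATIONS[tier]
```

A frozen dataclass keeps the obligation set immutable, so a compliance checklist generated from it cannot drift from the mapping at runtime.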