Law on Artificial Intelligence
Obligations Covered
- Risk Assessment
- Transparency & Disclosure
- Bias & Discrimination Prevention
- Record-Keeping & Documentation
Risk-Based Classification and Management
Kazakhstan's AI law is the first in Central Asia, establishing a three-tier risk framework (minimum/medium/high) that broadly mirrors the EU AI Act's risk-based approach. High-risk AI systems must use the state National AI Platform for development and testing — a state-platform requirement with no counterpart in Western AI laws.
Requirements
| Requirement | Details |
|---|---|
| Risk classification | Owners/holders must classify AI systems by risk degree (minimum, medium, high) based on potential impact on safety, rights, freedoms, and public order |
| Risk identification | Identify and analyse known and foreseeable risks across the AI system lifecycle |
| Risk mitigation | Implement safety and reliability measures commensurate with risk tier |
| Documentation | Maintain tier-specific documentation per lists approved by the Ministry of AI and Digital Development |
| High-risk audits | High-risk AI systems face enhanced scrutiny; audits are implied via Ministry oversight rather than expressly mandated |
| National AI Platform | High-risk system development and testing must use the state National AI Platform operated by National Information Technologies JSC |
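The tiered duties in the table above can be sketched in code. This is a minimal illustration, not a statutory checklist: the three tier names come from the law, but the mapping of obligations to tiers is my own reading of the table, and the real documentation lists are approved by the Ministry.

```python
from enum import Enum

class RiskTier(Enum):
    """The three risk tiers defined by the law."""
    MINIMUM = "minimum"
    MEDIUM = "medium"
    HIGH = "high"

# Illustrative mapping of tiers to obligations, paraphrasing the table
# above; the Ministry-approved documentation lists govern real compliance.
TIER_OBLIGATIONS = {
    RiskTier.MINIMUM: ["risk identification",
                       "tier-specific documentation"],
    RiskTier.MEDIUM: ["risk identification",
                      "risk mitigation",
                      "tier-specific documentation"],
    RiskTier.HIGH: ["risk identification",
                    "risk mitigation",
                    "tier-specific documentation",
                    "development/testing on the National AI Platform",
                    "enhanced Ministry oversight"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a tier."""
    return TIER_OBLIGATIONS[tier]
```

The point of the structure is that obligations are strictly cumulative as the tier rises, with the National AI Platform duty attaching only at the high tier.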
Synthetic Content Labeling and User Notification
Kazakhstan mandates machine-readable markings on all distributed synthetic AI outputs (images, text, video) — a technically specific requirement that affects any AI system generating content for Kazakh users. Combined with advance user notification of AI involvement, this creates dual transparency obligations covering both the content itself and the service interaction.
Requirements
| Requirement | Details |
|---|---|
| Synthetic output labeling | All distributed synthetic content (images, text, video) generated by AI must include machine-readable markings and visible/other warnings |
| User notification | Users must be notified in advance of AI use in goods, works, or services before interaction |
| Terms of use | Terms governing AI system use must be provided to users before use |
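The law requires a machine-readable marking plus a visible warning, but does not prescribe a format for either. Below is a minimal sketch of the dual obligation, assuming a JSON disclosure record as the machine-readable marking (to be embedded in metadata such as XMP, or served alongside the content) and a plain-text banner as the visible warning; all field names are my own, not mandated.

```python
import json
from datetime import datetime, timezone

def make_machine_readable_marking(generator: str, content_type: str) -> str:
    """Build a JSON disclosure record for a synthetic output.
    The schema is illustrative; the law does not prescribe one."""
    record = {
        "synthetic": True,
        "generator": generator,            # hypothetical model identifier
        "content_type": content_type,      # "image" | "text" | "video"
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)

def make_visible_warning(lang: str = "en") -> str:
    """Human-readable warning displayed with the content."""
    warnings = {"en": "This content was generated by an AI system."}
    return warnings[lang]
```

A distributor would attach both artifacts to every synthetic output: the JSON record satisfies the machine-readable requirement, the banner the visible one.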
Prohibited AI Practices
Kazakhstan's prohibition list covers social scoring and biometric discrimination — two categories that directly constrain AI systems used in hiring, lending, and public services. The ban on subconscious manipulation techniques is broadly worded and could catch persuasion AI, recommender systems, and targeted advertising tools.
Requirements
| Requirement | Details |
|---|---|
| Manipulative techniques banned | Prohibited to use AI to exert subconscious influence or exploit user vulnerabilities |
| Social scoring banned | AI-based social scoring of citizens is prohibited |
| Biometric discrimination banned | AI-based discrimination using biometric data is prohibited |
| Emotion detection restricted | AI-based emotion detection without the subject's consent is prohibited |
| Anti-competitive practices banned | Restricting the development, marketing, or implementation of AI through anti-competitive practices is prohibited |
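A deployer could fold the table above into a pre-deployment screen that flags declared system capabilities falling into a prohibited category. The capability labels here are hypothetical shorthand, not statutory terms.

```python
# Prohibited-practice categories paraphrased from the table above.
# The labels are illustrative shorthand, not terms from the statute.
PROHIBITED = {
    "subconscious_manipulation",
    "social_scoring",
    "biometric_discrimination",
    "emotion_detection_without_consent",
    "anti_competitive_restriction",
}

def screen(declared_capabilities: set[str]) -> set[str]:
    """Return declared capabilities that fall into prohibited categories."""
    return declared_capabilities & PROHIBITED
```

For example, a system declaring `{"recommendation", "social_scoring"}` would be flagged for `social_scoring` before deployment.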
Documentation and Record-Keeping
Requirements
| Requirement | Details |
|---|---|
| Tier-specific documentation | Maintain documentation per lists approved by the Ministry of AI and Digital Development; depth scales with risk tier |
| Risk records | Document risk identification, analysis of known and foreseeable risks, and safety/reliability measures taken |
| User support records | Maintain records of user support obligations and terms of use provided |
| Ministry approval | Documentation list formats are approved by the Ministry; owners must comply with current approved lists |
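Risk records of the kind the table describes lend themselves to append-only structured entries. A minimal sketch follows; the field names are my own choice, since the actual record contents are governed by the Ministry-approved documentation lists.

```python
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskRecord:
    """One entry in a risk log: an identified risk and the
    safety/reliability measure taken. Fields are illustrative;
    Ministry-approved lists govern the real contents."""
    system_id: str
    risk_tier: str          # "minimum" | "medium" | "high"
    risk_description: str   # known or foreseeable risk
    mitigation: str         # measure commensurate with the tier
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical entry for a high-risk system.
record = RiskRecord(
    system_id="credit-scoring-01",
    risk_tier="high",
    risk_description="model drift degrading decision quality",
    mitigation="quarterly revalidation on the National AI Platform",
)
```

Serialising each entry (e.g. via `asdict`) yields documentation that scales naturally with the risk tier: higher tiers simply accumulate more entries per the approved lists.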