Practical Steps for EU AI Act Compliance
What the EU AI Act requires, which of your AI systems are affected, and concrete steps to achieve and maintain compliance.
The EU AI Act is the first comprehensive AI regulation. It applies to organizations that develop or deploy AI systems within the EU, regardless of where the organization is based. Compliance is not optional, and penalties for non-compliance reach up to 35 million euros or 7% of global annual turnover, whichever is higher. This guide covers what you need to do, organized by practical steps rather than legal articles.
Understanding the Risk Categories
The Act classifies AI systems by risk level, with requirements scaled accordingly:
Unacceptable risk (banned). Social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), emotion recognition in workplaces and educational institutions, and manipulative techniques that exploit vulnerabilities. If a system falls into this category, stop deploying it in the EU.
High risk. AI systems used in critical areas: employment and worker management, access to education, creditworthiness assessment, law enforcement, migration and border control, and safety components of regulated products. These face the most extensive requirements.
Limited risk. Systems that interact with people (chatbots), generate synthetic content (deepfakes), or perform emotion recognition or biometric categorization outside the banned contexts. These have transparency obligations.
Minimal risk. Everything else. No specific requirements, though voluntary codes of practice are encouraged.
Step 1 - Inventory Your AI Systems
Before you can comply, you need to know what you have. Conduct an inventory of all AI systems your organization develops, deploys, or procures. For each system, document its purpose, the data it uses, who it affects, and where it is deployed. Classify each system by the Act’s risk categories.
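To make the inventory concrete, here is a minimal sketch of an inventory record in Python. The field names (purpose, data_sources, affected_groups, and so on) are illustrative assumptions, not terms defined by the Act; adapt them to whatever registry your organization already uses.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned practices
    HIGH = "high"                  # extensive requirements (Step 2)
    LIMITED = "limited"            # transparency obligations (Step 3)
    MINIMAL = "minimal"            # no specific requirements

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI inventory.

    Field names are illustrative, not terms from the Act."""
    name: str
    purpose: str                   # intended purpose of the system
    data_sources: list[str]        # data the system is trained on or consumes
    affected_groups: list[str]     # who the system's outputs affect
    deployment_regions: list[str]  # where it is deployed
    risk_category: RiskCategory    # classification under the Act

# Example entry: a CV-screening tool is high-risk because
# employment is one of the Act's high-risk areas.
inventory = [
    AISystemRecord(
        name="cv-screener",
        purpose="Rank job applications for recruiters",
        data_sources=["historical hiring decisions", "CV text"],
        affected_groups=["job applicants"],
        deployment_regions=["EU"],
        risk_category=RiskCategory.HIGH,
    ),
]

# Everything classified high-risk needs the Step 2 work below.
high_risk = [s for s in inventory if s.risk_category is RiskCategory.HIGH]
```

Even a spreadsheet works at small scale; the point is that every system has exactly one owner, one documented purpose, and one risk classification before you move to Step 2.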
Step 2 - Address High-Risk System Requirements
For each high-risk system, you must implement:
Risk management system. A documented process for identifying, analyzing, estimating, and evaluating risks throughout the system’s lifecycle. This is not a one-time assessment but an ongoing process.
Data governance. Training, validation, and testing datasets must meet quality criteria: relevance, representativeness, completeness, and freedom from errors. Document your data practices.
Technical documentation. Detailed documentation of the system’s design, development, capabilities, limitations, and performance. This must be sufficient for authorities to assess compliance.
Record-keeping. Automatic logging of system operations to enable traceability. Logs must be retained for a period appropriate to the system’s intended purpose (a minimal logging sketch follows this list).
Transparency. Provide clear information to deployers about the system’s capabilities, limitations, intended purpose, and human oversight requirements.
Human oversight. Design the system so it can be effectively overseen by humans. This includes the ability to interpret outputs, decide not to use the system, intervene in or stop the system, and override outputs.
Accuracy, robustness, and cybersecurity. The system must perform consistently and be resilient to errors, faults, and attempts at manipulation.
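To make the record-keeping requirement concrete, here is a minimal logging sketch, assuming a Python service that wraps each prediction in a structured, timestamped audit entry. Names such as log_prediction and model_version are illustrative, not terms from the Act; in production you would ship these records to append-only storage with a retention policy matched to the system's intended purpose.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Structured audit logger for system operations.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.log"))

def log_prediction(model_version: str, inputs: dict, output, operator: str) -> str:
    """Record one system operation so it can be traced later."""
    event_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "event_id": event_id,                  # ties log entries to incident reports
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # which model produced this output
        "inputs": inputs,                      # or a hash/reference, if inputs are sensitive
        "output": output,
        "operator": operator,                  # who was overseeing the system
    }))
    return event_id

# Usage: call alongside every prediction the high-risk system makes.
event = log_prediction(
    "credit-model-2.3", {"applicant_id": "A-1017"}, "approve", operator="analyst-42"
)
```

Logging the operator alongside each decision also supports the human oversight requirement: you can later demonstrate who could have intervened, and when.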
Step 3 - Meet Transparency Obligations
For limited-risk systems, ensure users are informed when they are interacting with AI, when content is AI-generated, and when emotion recognition or biometric categorization is being used.
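As one illustration, here is a minimal sketch of a transparency wrapper for a chatbot, assuming a Python backend. The disclosure text and the ai_generated metadata flag are illustrative choices: the Act requires that the information be clear and that generated content be marked in a machine-readable way, but it does not prescribe this exact format.

```python
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chat_response(text: str, first_message: bool) -> dict:
    """Attach the AI-interaction disclosure and a machine-readable marker.

    Field names here are illustrative, not mandated by the Act."""
    return {
        "text": f"{DISCLOSURE}\n\n{text}" if first_message else text,
        "metadata": {
            "ai_generated": True,        # machine-readable marking of synthetic content
            "generator": "support-bot",  # hypothetical system identifier
        },
    }

print(wrap_chat_response("How can I help you today?", first_message=True))
```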
For general-purpose AI models (including LLMs), providers must publish a sufficiently detailed summary of the content used for training and put in place a policy to comply with EU copyright law.
Step 4 - Implement AI Literacy
The Act requires organizations to ensure staff involved in the operation and use of AI systems have sufficient AI literacy. This means training programs that cover what AI can and cannot do, how your specific systems work, and how to exercise appropriate oversight.
Step 5 - Establish Ongoing Compliance
Conformity assessments. High-risk systems require conformity assessments before deployment and after significant modifications. Some categories require third-party assessment.
Post-market monitoring. Maintain a system for monitoring AI system performance after deployment. Collect and analyze data on performance, incidents, and user feedback (see the sketch after this list).
Incident reporting. Report serious incidents to the relevant national authority. Define internal processes for detecting, investigating, and reporting incidents.
Documentation maintenance. Keep technical documentation current as systems evolve. This is particularly challenging for systems that undergo continuous training.
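As one way to operationalize monitoring and incident detection together, here is a minimal sketch, assuming you already log predictions and outcomes (as in the Step 2 example). The threshold and the investigation stub are illustrative assumptions; what counts as a serious incident, and where and how quickly it must be reported, is defined by the Act and your national authority, not by this code.

```python
from dataclasses import dataclass

@dataclass
class MonitoringWindow:
    """Aggregated post-market statistics for one period."""
    period: str
    predictions: int
    errors: int      # e.g., outcomes contradicting the prediction
    complaints: int  # user-reported problems

ERROR_RATE_THRESHOLD = 0.05  # illustrative; derive yours from the risk assessment

def review_window(window: MonitoringWindow) -> None:
    """Flag windows whose performance drifts past the acceptable level."""
    error_rate = window.errors / max(window.predictions, 1)
    if error_rate > ERROR_RATE_THRESHOLD or window.complaints > 0:
        open_internal_investigation(window)

def open_internal_investigation(window: MonitoringWindow) -> None:
    # A real process would create a tracked ticket, pull the audit
    # logs for the period, and decide whether the event meets the
    # Act's definition of a serious incident. If it does, reporting
    # to the relevant national authority is subject to a deadline.
    print(f"Investigate {window.period}: "
          f"errors={window.errors}, complaints={window.complaints}")

review_window(MonitoringWindow("2026-W07", predictions=1200, errors=90, complaints=2))
```

The design point is the link between the three obligations: monitoring aggregates the records you keep under Step 2, and incident reporting consumes what monitoring flags.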
Timeline
Provisions are being phased in. Bans on prohibited practices have applied since February 2025 and obligations for general-purpose AI models since August 2025; high-risk system requirements apply from August 2026, with some product-related categories following in 2027. Ensure your compliance program accounts for these deadlines with sufficient lead time for implementation.
Need help implementing this?
Turn this knowledge into a working prototype. Our structured workshop methodology takes you from idea to deployed AI solution in three sessions.
Explore AI Workshops