AI regulation is evolving rapidly across jurisdictions. Organizations developing or deploying AI globally must navigate an increasingly complex patchwork of laws, frameworks, and standards. This overview maps the current landscape as of early 2026.

European Union

The EU has the most comprehensive AI regulatory framework globally.

EU AI Act (Regulation (EU) 2024/1689) - The world’s first comprehensive AI law, establishing a risk-based classification system with four tiers: unacceptable, high, limited, and minimal risk. Prohibitions on unacceptable-risk AI practices applied from February 2025. GPAI model obligations and the governance provisions apply from August 2025. Most remaining obligations, including requirements for high-risk systems listed in Annex III, apply from August 2026, with high-risk systems embedded in Annex I regulated products following in August 2027. Enforced by national market surveillance authorities and the European AI Office.
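The four-tier structure can be sketched as a simple lookup. The tier names follow the Act; the use-case labels and the obligation summaries attached to each tier are illustrative assumptions, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers; obligation summaries are illustrative only."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no additional AI Act obligations"

# Illustrative examples only -- real classification requires legal analysis
# against Article 5 (prohibitions) and Annex III (high-risk use cases).
EXAMPLE_USE_CASES = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Defaulting to MINIMAL here keeps the sketch simple; in practice an
    # unmapped use case needs review, not a permissive default.
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

print(classify("cv_screening_for_hiring").name)  # HIGH
```

The point of the sketch is that obligations attach to the tier, not the technology: the same model can land in different tiers depending on the use case it is deployed for.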

GDPR remains the primary constraint on AI systems that process personal data. NIS2 covers the cybersecurity of AI systems in critical sectors. DORA adds operational-resilience requirements for AI in financial services. The Cyber Resilience Act will impose cybersecurity requirements on AI-enabled products with digital elements.

United States

The US relies on a sectoral approach with no single comprehensive AI law.

Executive Order 14110 (October 2023) directed federal agencies to manage AI risks and tasked NIST with developing AI safety standards; it was rescinded in January 2025, but the NIST AI Risk Management Framework (AI RMF) remains the primary voluntary framework. Sector regulators apply existing authority: the FDA oversees AI in medical devices, the SEC monitors AI in financial markets, the FTC enforces against deceptive AI practices, and the EEOC addresses AI in employment decisions.

State legislation is accelerating. Colorado enacted the first comprehensive state AI law (the Colorado AI Act, 2024). California, Illinois, Texas, and others have proposed or enacted targeted AI legislation covering areas such as deepfakes, automated employment decisions, and consumer protection.

China

China has moved quickly with binding, technology-specific regulations.

Algorithmic Recommendation Regulation (2022) governs recommendation algorithms used by online services. Deep Synthesis Regulation (2023) covers deepfakes and synthetic content. Generative AI Regulation (2023, the Interim Measures for Generative AI Services) governs generative AI services offered to the public. The AI Safety Governance Framework provides broader guidance. China’s approach is notable for its speed and specificity, regulating individual AI technologies as they emerge.

United Kingdom

The UK initially adopted a principles-based, sector-led approach without a single AI law, relying on existing regulators (FCA, ICO, CMA, Ofcom) to apply AI principles within their domains. However, the AI Safety Institute, established in 2023 and renamed the AI Security Institute in 2025, is developing more structured evaluation frameworks, and proposals for binding AI legislation are under consideration.

Other Key Jurisdictions

Canada - The Artificial Intelligence and Data Act (AIDA) was proposed as part of the Digital Charter Implementation Act (Bill C-27), but the bill lapsed when Parliament was prorogued in January 2025.

Brazil - The AI bill (PL 2338/2023) is progressing through the legislature with a risk-based approach modeled on the EU’s.

Japan - Relies on voluntary guidelines and a sector-specific approach, aligned with the G7 Hiroshima AI Process principles.

South Korea - The AI Basic Act, enacted in January 2025, provides a framework for AI governance with a focus on high-impact AI.

India - No comprehensive AI legislation yet, but sectoral regulators are issuing guidance, notably SEBI for AI in financial markets.

International Coordination

G7 Hiroshima AI Process established international guiding principles and a code of conduct for advanced AI systems. OECD AI Principles provide a foundational framework adopted by over 40 countries. The UN AI Advisory Body is developing global AI governance recommendations. The Council of Europe Framework Convention on AI is the first legally binding international treaty on AI, focused on human rights, democracy, and the rule of law.

Strategic Implications

Organizations should adopt a compliance strategy that builds to the strictest applicable standard (typically the EU AI Act for global deployments), implement modular compliance architectures that can adapt to new requirements, monitor regulatory developments in all markets where they operate, and engage with standards bodies and industry groups to influence emerging regulations.
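A modular compliance architecture of the kind described above can be sketched as data: tag controls per jurisdiction, and derive a deployment’s obligation set as the union of the controls for every market it ships to. All jurisdiction and control names below are illustrative assumptions, not a real control catalogue.

```python
# Illustrative per-jurisdiction control sets; "build to the strictest
# applicable standard" falls out as the union across target markets.
REQUIREMENTS: dict[str, set[str]] = {
    "EU": {"risk_classification", "technical_documentation",
           "human_oversight", "transparency_notice"},
    "US": {"sector_regulator_review", "transparency_notice"},
    "CN": {"algorithm_filing", "content_labeling", "transparency_notice"},
    "UK": {"regulator_principles_mapping", "transparency_notice"},
}

def obligations(markets: list[str]) -> set[str]:
    """Union of controls across target markets."""
    out: set[str] = set()
    for market in markets:
        out |= REQUIREMENTS[market]
    return out

# A global deployment inherits every market's controls; adding a new
# jurisdiction later means adding one entry, not redesigning the program.
global_set = obligations(["EU", "US", "CN"])
```

The design choice this illustrates is separability: new or changed regulations update one jurisdiction’s entry, while the derivation of a deployment’s obligations stays the same.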