OECD AI Principles - The International Foundation for Trustworthy AI
How the OECD AI Principles became the most widely adopted international framework for responsible AI, influencing policy in over 40 countries.
The OECD Principles on Artificial Intelligence, adopted in May 2019, were the first intergovernmental standard for responsible AI. Originally endorsed by the 36 OECD member countries and subsequently adopted by the G20, the principles now count more than 40 adherent countries. They have become the foundational reference point for national AI strategies, regulatory frameworks, and corporate AI ethics policies worldwide.
The Five Principles
The OECD AI Principles are organized into five value-based principles for the responsible stewardship of trustworthy AI.
Inclusive Growth, Sustainable Development, and Well-Being
AI should benefit people and the planet. Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, including augmenting human capabilities, reducing inequalities, and advancing sustainability. This principle establishes that AI is not an end in itself but a tool that should serve broader societal objectives.
Human-Centered Values and Fairness
AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity. They should include appropriate safeguards to enable human intervention where needed. This principle directly addresses concerns about AI systems that perpetuate discrimination or undermine fundamental rights, and it establishes fairness as a core design requirement rather than an afterthought.
Transparency and Explainability
Organizations and individuals developing, deploying, or operating AI systems should provide meaningful information appropriate to the context. This includes fostering a general understanding of AI systems, making stakeholders aware of their interactions with AI, enabling those affected by an AI system to understand and challenge its output, and providing information about the factors and logic that contributed to a prediction or recommendation.
Robustness, Security, and Safety
AI systems should function in a robust, secure, and safe way throughout their lifecycle, and potential risks should be continually assessed and managed. This includes ensuring traceability of datasets, processes, and decisions made during the AI system lifecycle, and enabling analysis of the AI system’s outcomes along with responses to those outcomes.
Accountability
Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles. This principle establishes that the entities responsible for AI systems must be identifiable and answerable for the systems’ behavior and impacts.
Recommendations for Policy Makers
Beyond the five value-based principles, the OECD provides five recommendations for governments:
- Invest in AI research and development - including long-term public investment and interdisciplinary research.
- Foster a digital ecosystem for AI - through data access, computing infrastructure, and interoperability mechanisms.
- Shape an enabling policy environment - by reviewing and adapting regulatory frameworks to encourage innovation while managing risks.
- Build human capacity and prepare for labor market transformation - through education, training, and support for workers in transition.
- Facilitate international cooperation - by sharing information, developing common standards, and working toward interoperable governance frameworks.
The OECD AI Policy Observatory
To support implementation, the OECD created the AI Policy Observatory (OECD.AI), which tracks over 1,000 AI policy initiatives across member and partner countries. This platform provides a shared evidence base for policymakers, enabling them to learn from other countries’ approaches and identify emerging best practices.
Influence on National and International Policy
The OECD AI Principles have shaped AI regulation globally. The EU AI Act references OECD definitions and risk categories. The US National AI Initiative and the 2023 Executive Order on Safe, Secure, and Trustworthy AI align with OECD principle areas. Japan, Canada, the UK, and Australia have all built national AI strategies that explicitly reference the OECD framework. The G7 Hiroshima Process on AI and the Global Partnership on AI (GPAI) both operate within the conceptual framework established by these principles.
The principles’ broad adoption is partly due to their high-level, technology-neutral formulation. They provide a shared vocabulary and value system without prescribing specific technical implementations, allowing countries and organizations to adapt them to their own legal, cultural, and economic contexts.
Sources
- OECD. “Recommendation of the Council on Artificial Intelligence.” OECD Legal Instruments, OECD/LEGAL/0449, May 22, 2019. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449 — The primary legal instrument adopting the five AI principles.
- OECD.AI Policy Observatory. “OECD Principles on AI.” https://oecd.ai/en/ai-principles — Official landing page with updated guidance and country implementation tracking.
- OECD. “G20 AI Principles.” June 2019. https://oecd.ai/en/g20 — G20 adoption of the OECD principles, extending their reach to non-OECD member economies including China, Brazil, and India.