The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023 by the US National Institute of Standards and Technology, is a voluntary framework designed to help organizations manage risks associated with AI systems. Unlike the EU AI Act, it is not legally binding, but it has become the de facto standard for AI risk management in the United States and is referenced by federal agencies, industry standards bodies, and international organizations.

Structure

The AI RMF is organized into two parts. Part 1 describes how organizations can frame risks related to AI and outlines the characteristics of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.

Part 2 provides the AI RMF Core, structured around four functions:

  • Govern: establishes organizational AI risk management policies, roles, and culture.
  • Map: identifies and contextualizes AI risks based on the system's use case and deployment context.
  • Measure: employs quantitative and qualitative methods to analyze and monitor AI risks.
  • Manage: implements risk treatments, including mitigation, transfer, avoidance, or acceptance.
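The Core's structure lends itself to a simple data model. The sketch below shows one way an organization might encode the four functions and the Manage-function treatment options in a lightweight risk register; the `Risk` and `RiskRegister` names are illustrative conveniences, not terms defined by the framework.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four AI RMF Core functions (NIST AI 100-1).
class CoreFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

# Risk treatment options described under the Manage function.
class Treatment(Enum):
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"
    ACCEPT = "accept"

@dataclass
class Risk:
    description: str
    function: CoreFunction               # Core function under which the risk was recorded
    treatment: "Treatment | None" = None # set once a Manage-function decision is made

@dataclass
class RiskRegister:
    risks: "list[Risk]" = field(default_factory=list)

    def record(self, description: str, function: CoreFunction) -> Risk:
        """Log a newly identified risk (typically during Map or Measure)."""
        risk = Risk(description, function)
        self.risks.append(risk)
        return risk

    def untreated(self) -> "list[Risk]":
        """Risks awaiting a treatment decision under Manage."""
        return [r for r in self.risks if r.treatment is None]

register = RiskRegister()
risk = register.record(
    "Training data may underrepresent some user groups", CoreFunction.MAP
)
risk.treatment = Treatment.MITIGATE  # decision made under the Manage function
```

In practice a register like this would also track the framework's subcategories, owners, and review dates; the point here is only that the Core's vocabulary maps cleanly onto a small, auditable structure.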

AI RMF Playbook

NIST also published a companion Playbook that provides suggested actions and references for each subcategory in the framework. It offers practical guidance for implementing the framework’s recommendations, though organizations must adapt it to their specific context.

Relationship to Other Standards

The AI RMF aligns with ISO/IEC 42001 (AI Management System), the OECD AI Principles, and the EU AI Act’s risk-based approach. Organizations operating globally can use the AI RMF alongside these frameworks. Many of the AI RMF’s categories can be mapped to EU AI Act requirements, making the framework a useful starting point for organizations that must satisfy both US and EU expectations.

Adoption

While voluntary, the AI RMF has been widely adopted. Federal agencies reference it in procurement requirements. The framework was cited in Executive Order 14110 on AI safety. Industry consortia use it as a baseline for sector-specific AI risk guidance.

Sources

  • National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. (The framework itself; all GOVERN/MAP/MEASURE/MANAGE functions and trustworthy AI characteristics are defined here.)
  • National Institute of Standards and Technology. (2023). AI RMF Playbook. NIST. (Companion document providing suggested actions for each AI RMF subcategory.)
  • Executive Office of the President. (2023). Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House. (US federal AI policy; cites AI RMF 1.0 and directs NIST to build on it in subsequent guidance.)