<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI Frameworks on AI Solutions Wiki</title><link>https://ai-solutions.wiki/frameworks/</link><description>Recent content in AI Frameworks on AI Solutions Wiki</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sat, 28 Mar 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://ai-solutions.wiki/frameworks/index.xml" rel="self" type="application/rss+xml"/><item><title>Agile AI Delivery - Iterative Development for AI Projects</title><link>https://ai-solutions.wiki/frameworks/agile-ai-delivery/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/agile-ai-delivery/</guid><description>Agile methodologies were designed for software development where requirements can be broken into user stories and progress is measured by working software delivered each sprint. AI projects break this model in specific ways: model performance is not predictable from the backlog, data quality issues surface mid-sprint, and &amp;ldquo;done&amp;rdquo; is a probability rather than a binary state. Agile AI Delivery adapts standard Agile practices to accommodate these differences while preserving the iterative, feedback-driven philosophy that makes Agile effective.</description></item><item><title>AI Ethics Framework</title><link>https://ai-solutions.wiki/frameworks/ai-ethics-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/ai-ethics-framework/</guid><description>The AI Ethics Framework provides a structured approach to evaluating the ethical implications of AI systems throughout their lifecycle. It moves ethical considerations from abstract principles to concrete, actionable review processes that integrate into the AI development and deployment workflow.
Framework Principles Beneficence - AI systems should create genuine value for their intended users and broader society. The expected benefits must be clearly articulated and measured, not assumed. If the primary beneficiary of an AI system is the deploying organization rather than the people affected by its decisions, additional scrutiny is warranted.</description></item><item><title>AI Maturity Model - Assessing Organizational AI Readiness</title><link>https://ai-solutions.wiki/frameworks/maturity-model-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/maturity-model-ai/</guid><description>An AI maturity model provides a structured assessment of where an organization stands in its AI journey and what capabilities it needs to develop next. The model defines progressive levels of maturity across multiple dimensions, giving leadership a shared vocabulary for current state and a roadmap for improvement. This framework defines five levels across five dimensions, producing a practical assessment that informs AI strategy and investment priorities.
The Five Maturity Levels Level 1: Exploring - The organization is investigating AI possibilities.</description></item><item><title>AI Value Realization - Measuring and Demonstrating ROI from AI Investments</title><link>https://ai-solutions.wiki/frameworks/ai-value-realization/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/ai-value-realization/</guid><description>Most organizations struggle to demonstrate concrete returns from their AI investments. McKinsey&amp;rsquo;s research consistently shows that while AI adoption is increasing, fewer than 25% of organizations report significant financial impact from AI. The gap between AI investment and AI value realization is not primarily a technology problem; it is a measurement and management problem. This framework provides a structured approach to defining, tracking, and communicating the value AI delivers.
The Value Realization Challenge AI value is difficult to measure for several reasons.</description></item><item><title>Architecture Decision Records and Evaluation Methods</title><link>https://ai-solutions.wiki/frameworks/architecture-decision-records/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/architecture-decision-records/</guid><description>Architecture decisions in AI systems are harder to reverse than in traditional software. Choosing a batch inference pipeline over real-time serving, selecting a feature store, or deciding between fine-tuning and RAG all have long-lasting consequences. Architecture Decision Records (ADRs) provide a lightweight method to document these decisions so future teams understand not just what was decided, but why.
What Is an ADR An ADR is a short document that captures a single architecture decision.</description></item><item><title>BCG AI at Scale - The 10-20-70 Rule for Enterprise AI</title><link>https://ai-solutions.wiki/frameworks/bcg-ai-at-scale/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/bcg-ai-at-scale/</guid><description>Boston Consulting Group&amp;rsquo;s research on scaling AI across large enterprises identified a persistent pattern: organizations that succeed with AI invest fundamentally differently from those that stall after initial pilots. BCG codified this finding as the 10-20-70 rule, which states that only 10% of the effort in successful AI transformation involves algorithms and models, 20% involves data and technology infrastructure, and 70% involves business process transformation and people change management.
The 10-20-70 Breakdown 10% - Algorithms and Models The AI models themselves represent the smallest share of the effort required to achieve business impact.</description></item><item><title>Build-Measure-Learn for AI - Rapid Experimentation Cycles</title><link>https://ai-solutions.wiki/frameworks/build-measure-learn/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/build-measure-learn/</guid><description>Build-Measure-Learn (BML) is the core feedback loop from the Lean Startup methodology, and it maps naturally to AI development. Every AI project is fundamentally an experiment: you hypothesize that a model can solve a problem, build a version, measure its performance, and learn whether to continue, adjust, or pivot. The framework&amp;rsquo;s value is in making this cycle explicit, fast, and disciplined rather than allowing open-ended experimentation that consumes time without producing decisions.</description></item><item><title>Capability Mapping for AI - Identifying Automation Opportunities</title><link>https://ai-solutions.wiki/frameworks/capability-mapping/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/capability-mapping/</guid><description>Capability mapping creates a structured inventory of what an organization does (its capabilities), independent of how it does them (its processes and systems). A capability like &amp;ldquo;Customer Identity Verification&amp;rdquo; exists whether it is done manually by a human, by a rules engine, or by an AI model. 
For AI strategy, capability mapping provides a systematic way to identify where AI can enhance existing capabilities, which capabilities are ripe for automation, and where AI could enable entirely new capabilities.</description></item><item><title>Compound AI Systems - Architecture Framework for Multi-Model Coordination</title><link>https://ai-solutions.wiki/frameworks/compound-ai-systems/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/compound-ai-systems/</guid><description>The term &amp;ldquo;compound AI system&amp;rdquo; describes an AI system that combines multiple components &amp;ndash; language models, retrievers, code executors, tools, and programmatic control logic &amp;ndash; to accomplish tasks that no single model can handle reliably on its own. The concept was formalized by researchers at Berkeley AI Research (BAIR) in 2024, reflecting a shift in how production AI systems are built: away from monolithic models and toward systems of interacting components.</description></item><item><title>Cost-Benefit Analysis for AI - Building the Business Case</title><link>https://ai-solutions.wiki/frameworks/cost-benefit-analysis-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/cost-benefit-analysis-ai/</guid><description>Cost-benefit analysis (CBA) for AI projects quantifies the financial investment required and the value returned, producing ROI projections that inform go/no-go decisions. AI projects have cost structures that differ from traditional software: model training compute costs, ongoing inference costs, data labeling expenses, and the probabilistic nature of outcomes. A rigorous CBA accounts for these differences and presents a realistic case to decision-makers.
Cost Categories Development Costs (One-Time) Data preparation - Acquiring, cleaning, labeling, and transforming training data.</description></item><item><title>CRISP-DM: Cross-Industry Standard Process for Data Mining</title><link>https://ai-solutions.wiki/frameworks/crisp-dm/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/crisp-dm/</guid><description>CRISP-DM (Cross-Industry Standard Process for Data Mining) has been the dominant methodology for data science projects since its introduction in 1996. Despite its age, it remains the most commonly used framework because its six phases map naturally to how data science work actually happens - including the messy, iterative reality that linear project management frameworks miss.
The Six Phases 1. Business Understanding The most important and most frequently skipped phase. Before touching data, define the business problem clearly.</description></item><item><title>Data Fabric Framework - Metadata-Driven Architecture for Connected Data</title><link>https://ai-solutions.wiki/frameworks/data-fabric-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/data-fabric-framework/</guid><description>Data fabric is an architectural approach that uses metadata, knowledge graphs, and machine learning to create a unified, intelligent layer over an organization&amp;rsquo;s diverse data sources. Rather than moving all data into a single centralized repository, data fabric connects data where it lives and uses active metadata to automate data discovery, governance, integration, and delivery. Gartner has identified data fabric as a top data and analytics trend, and the approach is increasingly adopted by enterprises that need to make their data AI-ready without undertaking massive data consolidation projects.</description></item><item><title>Data Sovereignty Framework for AI in the EU</title><link>https://ai-solutions.wiki/frameworks/data-sovereignty-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/data-sovereignty-framework/</guid><description>Data sovereignty for AI systems in the EU requires a systematic approach that addresses legal requirements, technical architecture, and operational governance. This framework provides a structured method for organizations to establish and maintain data sovereignty across their AI operations.
Legal Foundation Understand the applicable legal requirements. GDPR Chapter V governs international data transfers with specific transfer mechanisms (adequacy decisions, SCCs, BCRs). National data sovereignty laws may impose additional requirements, particularly for government data, health data, and financial data.</description></item><item><title>Design Thinking for AI - Human-Centered AI Development</title><link>https://ai-solutions.wiki/frameworks/design-thinking-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/design-thinking-ai/</guid><description>Design Thinking is a problem-solving methodology that starts with understanding the user&amp;rsquo;s needs rather than the available technology. It follows five phases: Empathize, Define, Ideate, Prototype, and Test. For AI projects, Design Thinking prevents the most common failure mode: building an impressive AI capability that solves a problem nobody has. The methodology ensures that AI solutions are grounded in real user needs and designed for how people actually work.
Why Design Thinking Matters for AI AI projects are particularly susceptible to technology-driven thinking.</description></item><item><title>DORA - Digital Operational Resilience Act Framework</title><link>https://ai-solutions.wiki/frameworks/dora-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/dora-framework/</guid><description>The Digital Operational Resilience Act (Regulation 2022/2554) is an EU regulation that strengthens the digital resilience of financial entities. DORA entered into application on 17 January 2025 and applies to 20 types of financial entities and their ICT third-party service providers. It harmonizes ICT risk management rules across the financial sector, replacing fragmented national approaches with a single EU-wide framework.
Scope DORA applies to banks and credit institutions, investment firms, insurance and reinsurance undertakings, payment institutions, electronic money institutions, crypto-asset service providers, central securities depositories, central counterparties, trading venues, trade repositories, managers of alternative investment funds and UCITS, crowdfunding service providers, and ICT third-party service providers serving financial entities.</description></item><item><title>Double Diamond for AI - Diverge and Converge Twice</title><link>https://ai-solutions.wiki/frameworks/double-diamond-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/double-diamond-ai/</guid><description>The Double Diamond is a design process model from the UK Design Council that structures work into four phases: Discover, Define, Develop, and Deliver. The process diverges (exploring broadly) and converges (focusing narrowly) twice, forming two diamond shapes. The first diamond finds the right problem. The second diamond finds the right solution. For AI projects, the Double Diamond prevents the common failure of solving the wrong problem with the right technology.</description></item><item><title>Enterprise Cloud Governance Framework</title><link>https://ai-solutions.wiki/frameworks/cloud-governance-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/cloud-governance-framework/</guid><description>Enterprise cloud governance for AI workloads requires a structured framework that balances enablement (letting AI teams move fast) with control (maintaining security, compliance, and cost discipline). This framework defines the organizational model, policy layers, and operational practices needed.
Organizational Structure Cloud Center of Excellence (CCoE) - A cross-functional team responsible for defining governance policies, maintaining the cloud platform, and providing guidance to AI teams. The CCoE includes representatives from security, compliance, architecture, and finance.</description></item><item><title>EU AI Act Risk Classification Framework</title><link>https://ai-solutions.wiki/frameworks/eu-ai-act-risk-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/eu-ai-act-risk-framework/</guid><description>The EU AI Act (Regulation 2024/1689) is the first comprehensive AI regulation worldwide. It classifies AI systems into four risk tiers and scales compliance requirements accordingly. The Act applies to any organization that develops, deploys, or distributes AI systems in the EU market, regardless of where the organization is headquartered. This framework document details the risk classification system, requirements per tier, and implementation timeline.
Risk Tier 1: Unacceptable Risk (Prohibited) Article 5 bans AI systems that pose an unacceptable risk to fundamental rights.</description></item><item><title>EU Cyber Resilience Act</title><link>https://ai-solutions.wiki/frameworks/cyber-resilience-act/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/cyber-resilience-act/</guid><description>The Cyber Resilience Act (CRA), Regulation (EU) 2024/2847, establishes mandatory cybersecurity requirements for products with digital elements sold in the EU market. It entered into force in December 2024 with most obligations applying from December 2027. The CRA is the first EU-wide horizontal legislation imposing cybersecurity requirements on hardware and software products, including AI systems distributed as products.
Scope and Relevance to AI The CRA applies to products with digital elements, defined as any software or hardware product and its remote data processing solutions.</description></item><item><title>GDPR Framework for AI and Machine Learning</title><link>https://ai-solutions.wiki/frameworks/gdpr-ai-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/gdpr-ai-framework/</guid><description>The General Data Protection Regulation applies to any AI system that processes personal data of individuals in the EU, regardless of where the organization is based. GDPR was not written specifically for AI, but its principles create binding constraints on how machine learning models are trained, deployed, and maintained. Organizations building AI systems must understand where GDPR intersects with their ML workflows and what compliance requires in practice.
Lawful Basis for AI Data Processing Every use of personal data in an AI system requires a lawful basis under Article 6 of GDPR.</description></item><item><title>Global AI Regulatory Landscape</title><link>https://ai-solutions.wiki/frameworks/ai-regulatory-landscape/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/ai-regulatory-landscape/</guid><description>AI regulation is evolving rapidly across jurisdictions. Organizations developing or deploying AI globally must navigate an increasingly complex patchwork of laws, frameworks, and standards. This overview maps the current landscape as of early 2026.
European Union The EU has the most comprehensive AI regulatory framework globally.
EU AI Act (Regulation (EU) 2024/1689) - The world&amp;rsquo;s first comprehensive AI law, establishing a risk-based classification system. Prohibitions on unacceptable-risk AI practices applied from February 2025.</description></item><item><title>IEEE 7000 - Standard for Ethical AI Design Processes</title><link>https://ai-solutions.wiki/frameworks/ieee-7000-ethical-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/ieee-7000-ethical-ai/</guid><description>IEEE 7000-2021, officially titled &amp;ldquo;Standard Model Process for Addressing Ethical Concerns during System Design,&amp;rdquo; provides a systematic engineering process for identifying and addressing ethical concerns in autonomous and intelligent systems. Unlike high-level principles documents, IEEE 7000 specifies concrete process steps that engineering teams can follow to translate abstract ethical values into verifiable system requirements.
The Problem IEEE 7000 Solves Most organizations acknowledge that AI systems should be ethical, fair, and aligned with human values.</description></item><item><title>Inference-Time Scaling - Optimizing Reasoning at Inference Rather Than Training</title><link>https://ai-solutions.wiki/frameworks/inference-time-scaling/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/inference-time-scaling/</guid><description>Inference-time scaling refers to techniques that improve AI model performance by allocating more computation during inference (when the model processes a query) rather than during training. The core insight, demonstrated by research from OpenAI, Google DeepMind, and others in 2024-2025, is that for many tasks, spending more compute at inference time &amp;ndash; allowing the model to &amp;ldquo;think longer&amp;rdquo; &amp;ndash; can produce better results than training a larger model. This represents a fundamental shift in how AI capabilities are scaled.</description></item><item><title>Inverse Conway Maneuver for AI - Designing Teams to Shape Systems</title><link>https://ai-solutions.wiki/frameworks/inverse-conway-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/inverse-conway-ai/</guid><description>Conway&amp;rsquo;s Law states that organizations design systems that mirror their communication structures. If three teams build a compiler, you get a three-pass compiler. The Inverse Conway Maneuver deliberately designs the team structure to produce the desired system architecture. For AI organizations, this means structuring teams so that the AI systems they build have the right boundaries, interfaces, and ownership patterns rather than reflecting organizational accidents.
Conway&amp;rsquo;s Law in AI Organizations Conway&amp;rsquo;s Law manifests clearly in AI projects:</description></item><item><title>ISO/IEC 42001 - The First Certifiable AI Management System Standard</title><link>https://ai-solutions.wiki/frameworks/iso-42001/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/iso-42001/</guid><description>ISO/IEC 42001, published in December 2023, is the first international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system (AIMS) within organizations. Unlike guidance frameworks such as the NIST AI RMF, ISO/IEC 42001 is a certifiable standard: organizations can undergo third-party audits to demonstrate conformance, much as they do with ISO 27001 for information security or ISO 9001 for quality management.
Why a Management System Standard for AI Organizations adopting AI face a governance gap.</description></item><item><title>Jobs to Be Done for AI - Discovering AI Opportunities</title><link>https://ai-solutions.wiki/frameworks/jobs-to-be-done-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/jobs-to-be-done-ai/</guid><description>The Jobs to Be Done (JTBD) framework focuses on what a user is trying to accomplish (the &amp;ldquo;job&amp;rdquo;) rather than what they say they want (the &amp;ldquo;feature request&amp;rdquo;). Users do not want a chatbot; they want to find answers without waiting for someone to respond. Users do not want a classification model; they want to process incoming documents without reading each one manually. For AI projects, JTBD cuts through the technology hype and identifies use cases where AI delivers genuine value by doing a job better, faster, or cheaper than current alternatives.</description></item><item><title>KPI Framework for AI - Measuring AI Impact</title><link>https://ai-solutions.wiki/frameworks/kpi-framework-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/kpi-framework-ai/</guid><description>A KPI (Key Performance Indicator) framework for AI defines what to measure, how to measure it, and what the measurements mean for decision-making. Unlike OKRs, which set aspirational targets, KPIs provide ongoing operational visibility. For AI projects, a well-designed KPI framework answers three questions: Is the AI working technically? Is it delivering business value? Is it operationally healthy?
The Three Layers of AI KPIs Layer 1: Technical Performance KPIs These measure how well the AI system performs its core task.</description></item><item><title>Lean Startup for AI - Validated Learning with AI Products</title><link>https://ai-solutions.wiki/frameworks/lean-startup-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/lean-startup-ai/</guid><description>The Lean Startup methodology, developed by Eric Ries, focuses on validated learning through rapid experimentation. Build a minimum viable product (MVP), measure how customers respond, and learn whether your hypothesis is correct. For AI projects, Lean Startup addresses a critical risk: investing months in model development only to discover that the problem does not matter to users, the data does not exist at scale, or the business model does not work.</description></item><item><title>Medallion Architecture - Bronze, Silver, Gold Data Quality Layers</title><link>https://ai-solutions.wiki/frameworks/medallion-architecture/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/medallion-architecture/</guid><description>The medallion architecture is a data design pattern that organizes a data lakehouse into three progressive quality layers: bronze, silver, and gold. Each layer represents a different stage of data refinement, from raw ingestion to curated, business-ready datasets. The pattern was popularized by Databricks but is now used broadly across the data engineering community regardless of platform. 
It is particularly relevant for AI workloads because model quality depends directly on data quality, and the medallion architecture provides a systematic approach to ensuring that AI systems consume clean, validated, well-documented data.</description></item><item><title>Mixture of Experts - Routing Queries to Specialist Sub-Networks</title><link>https://ai-solutions.wiki/frameworks/mixture-of-experts/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/mixture-of-experts/</guid><description>Mixture of Experts (MoE) is a neural network architecture in which multiple specialist sub-networks (called &amp;ldquo;experts&amp;rdquo;) are combined with a routing mechanism (called a &amp;ldquo;gating network&amp;rdquo; or &amp;ldquo;router&amp;rdquo;) that selects which experts to activate for each input. The key insight is that not all parts of a model need to process every input. By activating only a subset of experts per token or input, MoE models can have very large total parameter counts while keeping the computational cost of processing any single input manageable.</description></item><item><title>Model Risk Management Framework</title><link>https://ai-solutions.wiki/frameworks/model-risk-management/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/model-risk-management/</guid><description>Model risk management is the discipline of identifying, measuring, and controlling the risk that arises from using quantitative models to make business decisions. In regulated industries, particularly financial services, model risk management is not optional. It is a supervisory requirement with specific expectations for how organizations develop, validate, and govern their models.
Origins and History The formal regulatory framework for model risk management originates from SR 11-7, &amp;ldquo;Guidance on Model Risk Management,&amp;rdquo; issued jointly by the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency (OCC) on April 4, 2011 [1].</description></item><item><title>MoSCoW Prioritization for AI - Must, Should, Could, Won't</title><link>https://ai-solutions.wiki/frameworks/moscow-prioritization/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/moscow-prioritization/</guid><description>MoSCoW is a prioritization technique that categorizes requirements into four groups: Must have, Should have, Could have, and Won&amp;rsquo;t have (this time). For AI projects, MoSCoW is particularly useful for managing scope in environments where stakeholders have expansive visions of what AI can do but delivery capacity and timelines are constrained. The explicit &amp;ldquo;Won&amp;rsquo;t have&amp;rdquo; category forces conversations about trade-offs that are often avoided.
The Four Categories Must Have - Requirements without which the AI solution has no value.</description></item><item><title>NIS2 Directive Compliance Framework</title><link>https://ai-solutions.wiki/frameworks/nis2-compliance-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/nis2-compliance-framework/</guid><description>The NIS2 Directive (Directive 2022/2555) is the EU&amp;rsquo;s updated cybersecurity legislation that replaced the original NIS Directive. It establishes a unified legal framework for cybersecurity across 18 critical sectors and applies to essential and important entities operating in the EU. Member states were required to transpose NIS2 into national law by 17 October 2024, and enforcement is now active across the EU.
Scope: Essential and Important Entities NIS2 significantly expanded the scope of the original directive.</description></item><item><title>NIST AI Risk Management Framework - Govern, Map, Measure, Manage</title><link>https://ai-solutions.wiki/frameworks/nist-ai-rmf/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/nist-ai-rmf/</guid><description>The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides a voluntary, rights-preserving framework for managing risks throughout the AI system lifecycle. Unlike regulatory mandates, the AI RMF is designed to be flexible and usable by organizations of any size, in any sector, regardless of their stage of AI adoption. It has rapidly become the reference framework for AI risk management in the United States and has influenced policy discussions internationally.</description></item><item><title>OECD AI Principles - The International Foundation for Trustworthy AI</title><link>https://ai-solutions.wiki/frameworks/oecd-ai-principles/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/oecd-ai-principles/</guid><description>The OECD Principles on Artificial Intelligence, adopted in May 2019, were the first intergovernmental standard for responsible AI. Originally endorsed by 36 OECD member countries and subsequently adopted by the G20, the principles now have adherence from over 40 countries. They have become the foundational reference point for national AI strategies, regulatory frameworks, and corporate AI ethics policies worldwide.
The Five Principles The OECD AI Principles are organized into five value-based principles for the responsible stewardship of trustworthy AI.</description></item><item><title>OKR Framework for AI - Objectives and Key Results</title><link>https://ai-solutions.wiki/frameworks/okr-framework-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/okr-framework-ai/</guid><description>OKRs (Objectives and Key Results) connect aspirational goals to measurable outcomes. An Objective is a qualitative statement of what you want to achieve. Key Results are quantitative measures that indicate whether you have achieved it. For AI programs, OKRs bridge the gap between executive AI ambitions (&amp;ldquo;become an AI-driven organization&amp;rdquo;) and the concrete, measurable progress that engineering teams can deliver and leadership can track.
Why OKRs Work for AI Programs AI programs suffer from two measurement problems.</description></item><item><title>Prosci ADKAR for AI Adoption - Change Management for AI Transformation</title><link>https://ai-solutions.wiki/frameworks/prosci-adkar-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/prosci-adkar-ai/</guid><description>The Prosci ADKAR model is a goal-oriented change management framework that describes five sequential outcomes an individual must achieve for change to be successful: Awareness, Desire, Knowledge, Ability, and Reinforcement. Originally developed for general organizational change, ADKAR is particularly relevant to AI adoption because AI transformation is fundamentally a people challenge. The technology works; the difficulty is getting people to trust it, use it, and change their workflows around it.</description></item><item><title>Release Management - Cadences, Trains, and Versioning</title><link>https://ai-solutions.wiki/frameworks/release-management/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/release-management/</guid><description>Release management determines how software moves from development to production. For AI systems, this includes both application code and trained models, which have different lifecycles, different validation requirements, and different rollback characteristics. This framework covers release cadences, release trains, and semantic versioning automation.
Release Cadences The right release cadence depends on the system&amp;rsquo;s risk profile, testing requirements, and organizational maturity.
Continuous deployment pushes every merged change to production automatically. This works for application code backed by comprehensive automated tests but is risky for model releases where performance can only be fully validated in production.</description></item><item><title>Responsible AI Framework</title><link>https://ai-solutions.wiki/frameworks/responsible-ai-framework/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/responsible-ai-framework/</guid><description>The Responsible AI Framework provides a structured approach to building, deploying, and operating AI systems that are fair, transparent, accountable, safe, and privacy-preserving. It translates high-level responsible AI principles into concrete organizational practices, technical requirements, and governance processes.
Framework Pillars Pillar 1: Governance and Accountability AI governance structure - Establish clear organizational structures for AI oversight. This includes an AI ethics board or review committee, designated AI system owners for each production system, and executive-level accountability for AI outcomes.</description></item><item><title>RICE Scoring for AI - Quantitative Use Case Prioritization</title><link>https://ai-solutions.wiki/frameworks/rice-scoring/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/rice-scoring/</guid><description>RICE is a scoring framework developed by Intercom for prioritizing product features. It scores each initiative on four dimensions: Reach, Impact, Confidence, and Effort. The RICE score is calculated as (Reach x Impact x Confidence) / Effort, producing a single number that enables direct comparison across candidates. For AI use case prioritization, RICE provides a more structured alternative to gut-feel ranking while remaining simple enough to use in a workshop setting.</description></item><item><title>SAFe for AI - Scaling Agile in AI Programs</title><link>https://ai-solutions.wiki/frameworks/safe-for-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/safe-for-ai/</guid><description>The Scaled Agile Framework (SAFe) provides structure for coordinating multiple Agile teams working toward shared objectives. When an organization runs not one but five or fifteen AI initiatives simultaneously, SAFe&amp;rsquo;s portfolio, program, and team layers help align investment decisions, manage dependencies, and coordinate delivery across teams. This article covers how to adapt SAFe&amp;rsquo;s practices for the specific characteristics of AI and ML programs.
Why SAFe Becomes Relevant for AI Single-team AI projects rarely need SAFe.</description></item><item><title>Shift-Left Testing for ML Systems</title><link>https://ai-solutions.wiki/frameworks/shift-left-testing/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/shift-left-testing/</guid><description>Shift-left testing moves testing activities earlier in the development lifecycle, catching defects when they are cheapest to fix. In ML projects, this principle is especially valuable because late-stage failures are expensive: a bug in feature engineering discovered after a week-long training run wastes compute, time, and data science effort. Shift-left testing for ML applies TDD, contract-first design, static analysis, and early data validation to catch problems before they compound.
Why ML Projects Need Shift-Left ML projects have a unique failure cascade.</description></item><item><title>Software Quality Assurance for AI/ML Projects</title><link>https://ai-solutions.wiki/frameworks/software-quality-assurance/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/software-quality-assurance/</guid><description>Quality assurance for AI/ML projects requires a broader definition of quality than traditional software QA. Software QA asks &amp;ldquo;does it do what we specified?&amp;rdquo; AI QA asks that question plus &amp;ldquo;does the model perform well enough, on the right data, without bias, and does it continue to perform well over time?&amp;rdquo; This framework covers quality planning, metrics selection, and quality gates for AI/ML projects.
Quality Planning Quality planning for AI projects must address three distinct quality domains:</description></item><item><title>Software Requirements Engineering for AI Systems</title><link>https://ai-solutions.wiki/frameworks/software-requirements-engineering/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/software-requirements-engineering/</guid><description>Requirements engineering for AI systems diverges from traditional software requirements in a fundamental way: you cannot specify exact behavior. A classification model&amp;rsquo;s accuracy is a target, not a guarantee. A recommendation engine&amp;rsquo;s relevance is measured statistically, not deterministically. This framework covers how to adapt elicitation, analysis, and specification practices for systems where uncertainty is inherent.
Elicitation for AI Projects Traditional elicitation techniques (interviews, workshops, document analysis) still apply, but the questions change.</description></item><item><title>Stakeholder Mapping for AI - Managing Influence and Alignment</title><link>https://ai-solutions.wiki/frameworks/stakeholder-mapping-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/stakeholder-mapping-ai/</guid><description>Stakeholder mapping identifies everyone who influences or is affected by an AI project, assesses their position (supportive, neutral, resistant), and defines engagement strategies to build and maintain alignment. AI projects generate more stakeholder complexity than typical technology projects because they trigger concerns about job displacement, algorithmic fairness, data privacy, and organizational change. A stakeholder map makes these dynamics visible and manageable.
Why AI Projects Need Explicit Stakeholder Management AI projects fail for non-technical reasons more often than technical ones.</description></item><item><title>SWEBOK V4 Knowledge Areas Overview</title><link>https://ai-solutions.wiki/frameworks/swebok-v4-overview/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/swebok-v4-overview/</guid><description>The Software Engineering Body of Knowledge (SWEBOK) is an IEEE standard that defines the knowledge areas a software engineer should possess. Version 4, released in 2024, updates the body of knowledge to reflect modern practices including cloud-native development, DevOps, and machine learning engineering. Understanding SWEBOK V4 helps AI/ML teams ensure they are not neglecting foundational software engineering practices while focusing on model development.
Knowledge Areas Software Requirements Covers elicitation, analysis, specification, and validation of requirements.</description></item><item><title>TDSP: Microsoft's Team Data Science Process</title><link>https://ai-solutions.wiki/frameworks/tdsp/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/tdsp/</guid><description>The Team Data Science Process (TDSP) is Microsoft&amp;rsquo;s methodology for executing data science projects in collaborative team settings. While CRISP-DM provides a process framework, TDSP goes further by defining team roles, standardized project structures, infrastructure recommendations, and explicit integration with agile development practices. It was designed to address the common failure mode where individual data scientists build models that never make it to production because the handoff to engineering was never planned.</description></item><item><title>Team Topologies for AI - Organizing AI Teams</title><link>https://ai-solutions.wiki/frameworks/team-topologies-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/team-topologies-ai/</guid><description>Team Topologies, developed by Matthew Skelton and Manuel Pais, defines four fundamental team types and three interaction modes for organizing technology teams. The framework optimizes for fast flow of change by reducing cognitive load, clarifying team boundaries, and designing deliberate interaction patterns. 
For AI organizations, Team Topologies addresses the structural question that every scaling AI program faces: how to organize data scientists, ML engineers, data engineers, and platform engineers into teams that deliver effectively without creating bottlenecks.</description></item><item><title>Value Stream Mapping for AI - Identifying Waste and Opportunity</title><link>https://ai-solutions.wiki/frameworks/value-stream-mapping-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/value-stream-mapping-ai/</guid><description>Value Stream Mapping (VSM) is a lean manufacturing technique that visualizes the entire flow of materials and information from request to delivery. Each step is documented with its processing time, wait time, and quality rate. The map reveals where value is created and where waste accumulates. For AI projects, VSM serves two purposes: mapping the AI delivery process itself (how models move from concept to production) and mapping business processes to identify where AI intervention would eliminate the most waste.</description></item><item><title>Wardley Mapping for AI - Strategic Technology Positioning</title><link>https://ai-solutions.wiki/frameworks/wardley-mapping-ai/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/wardley-mapping-ai/</guid><description>Wardley Mapping, created by Simon Wardley, visualizes the components needed to serve a user need, positioned by their maturity (from novel to commodity). The map reveals strategic opportunities: where to build custom solutions (novel components), where to use managed services (product-stage components), and where to use commodities (utility-stage components). For AI strategy, Wardley Maps answer the critical build-vs-buy questions that determine where an organization invests its AI engineering effort.
How a Wardley Map Works A Wardley Map has two axes:</description></item><item><title>Waterfall for AI Projects - When Sequential Planning Works</title><link>https://ai-solutions.wiki/frameworks/waterfall-ai-projects/</link><pubDate>Sat, 28 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/waterfall-ai-projects/</guid><description>Waterfall methodology moves through sequential phases: requirements, design, implementation, testing, and deployment. Each phase must be completed and approved before the next begins. In the AI community, waterfall is often dismissed as incompatible with the iterative nature of ML development. This is partially true but ignores the reality that many enterprise AI projects operate in environments where waterfall is not a choice but a constraint: regulated industries, government contracts, and organizations with phase-gated governance.</description></item><item><title>AWS Well-Architected AI/ML Lens - Applying Best Practices to Machine Learning</title><link>https://ai-solutions.wiki/frameworks/well-architected-ai-ml-lens/</link><pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/well-architected-ai-ml-lens/</guid><description>The AWS Well-Architected Framework covers principles that apply to any cloud workload. Machine learning introduces a distinct set of challenges - training pipelines, model drift, prompt injection, inference cost volatility - that the base framework does not fully address. The AWS Well-Architected ML Lens is a published extension that maps each of the six pillars to the ML lifecycle and provides ML-specific best practices.
Source: AWS Well-Architected ML Lens
What the ML Lens Adds The base Well-Architected Framework asks questions like &amp;ldquo;Do you have automated alerting?</description></item><item><title>The Well-Architected Framework - Why Every Cloud Provider Has One</title><link>https://ai-solutions.wiki/frameworks/well-architected-framework/</link><pubDate>Thu, 26 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/well-architected-framework/</guid><description>Every major cloud provider now publishes a Well-Architected Framework. AWS, Azure, and Google Cloud have each built their own version, and while the names and pillar counts differ slightly, the underlying logic is identical: cloud workloads fail in predictable ways, and a structured set of best practices can prevent most of those failures. This document explains what the framework is, where it came from, and why it matters especially for AI workloads.</description></item><item><title>Event Storming for AI Use Case Discovery</title><link>https://ai-solutions.wiki/frameworks/event-storming-ai/</link><pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/event-storming-ai/</guid><description>Event Storming is a collaborative modelling technique invented by Alberto Brandolini. It uses coloured sticky notes on a long paper roll to map a business domain in a single room with a cross-functional group. For AI projects, Event Storming is a powerful discovery tool because it makes visible exactly where human judgment is currently applied in a process - and judgment is what AI can potentially automate.
Workshop Setup Duration: 3-4 hours for a focused domain.</description></item><item><title>Impact Mapping for AI Projects</title><link>https://ai-solutions.wiki/frameworks/impact-mapping-ai/</link><pubDate>Wed, 25 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/impact-mapping-ai/</guid><description>Impact Mapping was created by Gojko Adzic as a strategic planning technique to ensure that software delivery is linked to business outcomes. It structures the reasoning behind a product decision as a mind map with four levels: Why (business goal), Who (actors who can influence the goal), How (impacts on actors), and What (deliverables). For AI projects, Impact Mapping is a powerful antidote to technology-first thinking - the tendency to start with &amp;ldquo;we should use AI&amp;rdquo; rather than &amp;ldquo;we have a business problem.</description></item><item><title>AI Readiness Assessment - Is Your Organization Ready?</title><link>https://ai-solutions.wiki/frameworks/ai-readiness-assessment/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/ai-readiness-assessment/</guid><description>The most common reason AI projects stall is not the technology - it is organizational unreadiness. A team that starts building before assessing its foundations tends to discover blockers mid-project: data that cannot be accessed, a security review that adds six months, or executives who withdraw support when the first prototype does not match their expectations. This assessment is designed to surface those issues before they become blockers.
The Five Dimensions 1.</description></item><item><title>From AI Idea to Working Prototype in 3 Workshops</title><link>https://ai-solutions.wiki/frameworks/three-workshop-method/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/three-workshop-method/</guid><description>The 3-Workshop Method was developed by Linda Mohamed, AWS Community Hero and AI and Cloud Consultant, as a structured approach to taking enterprises from AI idea to working prototype.
Most AI projects fail not because the technology does not work, but because organizations skip the hard early conversations and jump straight to implementation. The 3-Workshop Method structures those conversations into a repeatable process that ends with a working prototype and a team that understands what they built and why.</description></item><item><title>The Use Case Scoring Framework - From 57 Ideas to 3 Prototypes</title><link>https://ai-solutions.wiki/frameworks/use-case-scoring/</link><pubDate>Tue, 24 Mar 2026 00:00:00 +0000</pubDate><guid>https://ai-solutions.wiki/frameworks/use-case-scoring/</guid><description>This framework was developed by Linda Mohamed based on WSJF (Weighted Shortest Job First) principles adapted for AI use case prioritization across dozens of enterprise workshops.
When organizations run their first AI ideation workshop, they rarely leave with too few ideas. They leave with too many - Post-it notes covering three whiteboards, a shared document with 57 bullet points, and no clear path forward. The Use Case Scoring Framework exists to solve exactly that problem.</description></item></channel></rss>