Deploying an AI system is a technical milestone. Getting people to actually use it and trust its outputs is an organizational one. Most AI projects that fail to deliver value do so not because the model was inaccurate but because users never changed their workflows to incorporate it. Structured change management and deliberate training programs bridge this gap.

Origins and History

Change management as a discipline emerged from organizational psychology research in the mid-20th century. Kurt Lewin’s three-stage model (unfreeze, change, refreeze) published in 1947 laid the groundwork for understanding how groups adopt new behaviors [1]. John Kotter’s eight-step change model, introduced in his 1996 book Leading Change, provided a more granular framework widely adopted in enterprise transformations [2]. Jeff Hiatt founded Prosci in 1994 and developed the ADKAR model (Awareness, Desire, Knowledge, Ability, Reinforcement), which became one of the most widely used individual change frameworks in technology adoption [3]. As AI tools entered the workplace at scale from 2023 onward, these established change management frameworks were adapted specifically for the unique challenges AI presents: probabilistic outputs, trust calibration, and the need for users to develop judgment about when to accept or override AI recommendations.

Applying the ADKAR Model to AI Adoption

Awareness means ensuring users understand why AI is being introduced and what problem it solves. Without this, AI tools feel like management surveillance rather than productivity support.

Desire requires addressing the personal question every user asks: what is in it for me? Demonstrate time savings on tedious tasks. Show how AI handles the work people dislike, not the work they find meaningful.

Knowledge covers the practical skills users need. This includes prompt engineering basics, understanding confidence scores, knowing when the tool is likely to be wrong, and learning how to provide feedback that improves results.

Ability is the gap between knowing how and actually doing it in daily work. Hands-on workshops with real tasks from the user’s domain, office hours with AI champions, and sandbox environments for safe experimentation all help close this gap.

Reinforcement sustains adoption after the initial rollout. Recognize teams that integrate AI effectively. Share success stories. Continue iterating on the tool based on user feedback so people see that their input shapes the product.

Building Trust in AI Outputs

Trust is not binary. Users need to develop calibrated trust, knowing when to rely on AI and when to verify. Transparency features help: show confidence scores, provide source citations, and make it easy to see the reasoning behind a recommendation. Start with low-stakes use cases where errors are cheap. Let users build confidence before expanding to higher-stakes workflows.
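
One common pattern for supporting calibrated trust is to route outputs by confidence: accept high-confidence recommendations directly and flag the rest for human verification. A minimal sketch in Python, assuming a hypothetical prediction record with a `confidence` field and an illustrative 0.85 threshold:

```python
def route_prediction(prediction: dict, threshold: float = 0.85) -> str:
    """Return 'accept' for high-confidence outputs, 'verify' otherwise.

    The 'confidence' field and the 0.85 cutoff are illustrative; in
    practice the threshold is tuned per use case against observed
    error rates.
    """
    return "accept" if prediction["confidence"] >= threshold else "verify"

# A batch of hypothetical AI recommendations
batch = [
    {"id": 1, "confidence": 0.95},
    {"id": 2, "confidence": 0.60},
    {"id": 3, "confidence": 0.88},
]
decisions = {p["id"]: route_prediction(p) for p in batch}
```

Starting with a conservative (high) threshold and relaxing it as verification data accumulates mirrors the rollout advice above: begin where errors are cheap, then expand.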

Measuring Adoption

Usage metrics track whether people engage with the tool at all: daily active users, feature adoption rates, and session frequency. Low usage after rollout typically signals an Awareness or Desire problem in ADKAR terms.
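
These usage metrics can be computed directly from a raw event log. A hypothetical sketch in Python, assuming events are (user, date, feature) tuples and the licensed-user count is known:

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (user_id, date, feature) tuples.
events = [
    ("alice", date(2024, 5, 1), "summarize"),
    ("bob",   date(2024, 5, 1), "summarize"),
    ("alice", date(2024, 5, 2), "draft"),
    ("carol", date(2024, 5, 2), "summarize"),
]

def daily_active_users(events):
    """Count distinct users per day."""
    users_by_day = defaultdict(set)
    for user, day, _feature in events:
        users_by_day[day].add(user)
    return {day: len(users) for day, users in users_by_day.items()}

def feature_adoption_rate(events, feature, total_users):
    """Share of the user base that has used a feature at least once."""
    adopters = {user for user, _day, f in events if f == feature}
    return len(adopters) / total_users

dau = daily_active_users(events)
rate = feature_adoption_rate(events, "summarize", total_users=10)
```

Real deployments would pull events from product analytics rather than an in-memory list, but the definitions are the same.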

Satisfaction scores capture qualitative experience. Net Promoter Score adapted for internal tools, regular pulse surveys, and feedback channels reveal friction points that usage data alone cannot surface.
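
The standard NPS formula carries over unchanged to internal tools: the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6), with passives (7–8) counted in the denominator only. A minimal sketch with illustrative survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) dilute the score but are neither added nor subtracted.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical pulse-survey responses on the usual 0-10 scale
score = nps([10, 9, 9, 8, 7, 6, 3])
```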

Productivity impact measures whether AI adoption translates to business outcomes. Compare task completion times, error rates, or throughput before and after adoption. These metrics justify continued investment and guide prioritization of improvements.
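
The before/after comparison can be as simple as the relative change in mean task time, though a fair reading requires holding the task mix constant and watching for confounds. A hypothetical sketch with illustrative timings:

```python
from statistics import mean

# Hypothetical task completion times in minutes, before and after rollout.
before = [42, 38, 51, 45, 40, 47]
after = [30, 28, 35, 33, 29, 31]

def pct_improvement(before, after):
    """Relative reduction in mean task time after adoption, in percent."""
    b, a = mean(before), mean(after)
    return round(100 * (b - a) / b, 1)

improvement = pct_improvement(before, after)
```

With samples this small a significance test would be warranted before claiming impact; the point here is only the shape of the measurement.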

Sources

  1. Lewin, K. “Frontiers in Group Dynamics.” Human Relations, 1(1), 1947. Introduced the unfreeze-change-refreeze model of organizational change.
  2. Kotter, J. Leading Change. Harvard Business School Press, 1996. Eight-step framework for organizational transformation.
  3. Hiatt, J. ADKAR: A Model for Change in Business, Government and our Community. Prosci Learning Center Publications, 2006. Individual change management framework.