
Why AI Transformation in Healthcare Fails, And What Leaders Must Do Differently

March 24, 2026
10 min read

By Dr. Muhammad Ismail, AI Transformation Trainer, QBSLI. Based on peer-reviewed research published in Technovation.

Every healthcare organization today is under pressure to adopt AI. Boards are demanding it. Regulators are encouraging it. Technology vendors are selling it. Yet across hospitals and clinics, AI projects are stalling, underperforming, or being quietly abandoned by the very physicians they were designed to help.

Why?

A peer-reviewed study of 326 practicing physicians in Sweden, published in Technovation, one of the world's leading innovation journals, offers a revealing answer. The problem is not the technology. The problem is how organizations are approaching transformation.

The Uncomfortable Truth: Physicians Are Already Using AI, Without You

One of the most striking findings from the research is what we call Shadow AI: physicians are independently integrating AI tools into their clinical work, for tasks such as diagnosis, documentation, risk assessment, and patient communication, without organizational knowledge, guidance, or support.

This is not a minor compliance issue. It is a signal that your formal AI strategy is lagging behind what is already happening on the ground.

The implication for any organization undergoing AI transformation is direct: if you do not build the infrastructure, culture, and policies for AI adoption, your people will fill the vacuum themselves, leading to unpredictable results.

What Drives Physicians Toward AI, and What Pushes Them Away

The research identified clear patterns across five domains that shape whether AI adoption succeeds or fails. Understanding these is the foundation of any credible AI transformation blueprint.

What drives adoption:

  • Workflow efficiency — AI that automates documentation, summarizes patient records, and reduces administrative burden is welcomed because it solves a real daily pain point
  • Decision support — Tools that enhance diagnostic accuracy and flag risks physicians might miss are seen as genuinely valuable
  • Peer influence — When colleagues share positive experiences, adoption accelerates rapidly across teams
  • Organizational signals — Clear institutional direction and leadership commitment reduce hesitation and normalize AI as part of professional practice
  • Personal curiosity and optimism — Physicians who are involved in testing and development of AI tools develop trust and ownership of the technology

What creates resistance:

  • Legal and ethical ambiguity — Who is responsible when AI contributes to a misdiagnosis? Physicians will not commit to tools where liability is undefined
  • Data privacy concerns — Sensitive patient data flowing through AI systems, often with unclear ownership, creates real reluctance
  • Cognitive overload — Poorly designed AI that adds steps, generates excessive alerts, or complicates decisions is abandoned quickly
  • Organizational unpreparedness — Slow decision-making, lack of technical training, and absent IT infrastructure turn promising tools into frustrating obstacles
  • Techno-skepticism — Fear of deskilling, over-reliance, and loss of clinical judgment is real and must be addressed directly, not dismissed

The Hidden Complexity: AI Adoption Is Full of Paradoxes

Here is what most AI transformation frameworks miss entirely: the drivers and barriers of AI adoption do not cancel each other out. Instead, they coexist, creating paradoxical tensions that leaders must navigate rather than resolve.

The research frames this through five paradoxes that operate simultaneously inside healthcare organizations:

  • Task — AI streamlines workflows AND creates new cognitive burdens
  • Individual — Physicians are curious and optimistic AND fearful and skeptical, often at the same time
  • Technology — AI is highly capable AND poorly integrated with legacy systems
  • Organization — Leadership wants AI adoption AND lacks the infrastructure to support it
  • Environment — Policy mandates push AI adoption AND legal/ethical frameworks constrain it

This is the core insight for any AI transformation leader: you cannot eliminate these tensions. You must manage them. Organizations that treat adoption as a simple problem of "overcoming resistance" will keep failing. Those that build strategies which hold these tensions in balance will succeed.

What This Means for Your AI Transformation Strategy

Drawing on this research, here are the practical imperatives for any organization serious about AI transformation:

  1. Map your Shadow AI first. Before launching a new AI initiative, find out what your people are already using. You will almost certainly discover informal adoption that is either ahead of your strategy or creating risks you are unaware of.
  2. Invest in AI literacy before AI tools. The research shows that gaps in digital literacy, not technology quality, are the primary driver of failed adoption. Training must address both technical competency and practical judgment about when and how to use AI.
  3. Resolve the liability question publicly and explicitly. If your physicians do not know who is responsible when AI makes an error, they will default to avoidance. Clear policies on accountability are not a legal formality; they are a prerequisite for trust.
  4. Involve end users in the design and testing of AI systems. One of the strongest predictors of adoption in the research is whether physicians were involved in development. User involvement builds trust, surfaces practical problems early, and creates internal champions.
  5. Design for workflow integration, not workflow addition. AI that adds steps, requires verification of every output, or disrupts established clinical rhythms will be abandoned. The test for any AI tool is simple: does it make the physician's day easier or harder?
  6. Lead the culture, not just the technology. The research confirms that peer influence is a powerful adoption driver. Leadership should actively create forums for physicians to share experiences, celebrate early wins, and build collective confidence in AI tools.

The Strategic Opportunity for Your Organization

The gap between organizations that get AI transformation right and those that get it wrong is widening. The difference is rarely about which AI tools they purchased. It is about whether they understood the human, organizational, and ethical dimensions of transformation before they started.

The research evidence is clear: AI in healthcare is neither a simple efficiency upgrade nor an existential threat. It is a complex, paradox-laden transformation that requires leaders who can hold multiple truths simultaneously and build systems that are as thoughtful about people as they are about technology.

This article is based on peer-reviewed research by Dr. Muhammad Ismail, Henrik Barth, Magnus Holmén, Lena Petersson, and Luís Irgang, published in Technovation (2025), Halmstad University, Sweden. DOI: 10.1016/j.technovation.2025.103333

Dr. Muhammad Ismail is an AI Transformation Trainer at QBSLI, specializing in translating academic research on AI adoption into actionable strategy for organizations navigating digital transformation.
