
By Dr. Muhammad Ismail, AI Transformation Trainer, QBSLI
Based on peer-reviewed research published in Technovation
Every healthcare organization today is under pressure to adopt AI. Boards are demanding it. Regulators are encouraging it. Technology vendors are selling it. Yet across hospitals and clinics, AI projects are stalling, underperforming, or being quietly abandoned by the very physicians they were designed to help.
Why?
A peer-reviewed study of 326 practicing physicians in Sweden, published in Technovation, one of the world's leading innovation journals, offers a revealing answer. The problem is not the technology. The problem is how organizations are approaching transformation.
One of the most striking findings from the research is what we call Shadow AI: physicians are independently integrating AI tools into their clinical work, for tasks such as diagnosis, documentation, risk assessment, and patient communication, without organizational knowledge, guidance, or support.
This is not a minor compliance issue. It is a signal that your formal AI strategy is lagging behind what is already happening on the ground.
The implication for any organization undergoing AI transformation is direct: if you do not build the infrastructure, culture, and policies for AI adoption, your people will fill the vacuum themselves, leading to unpredictable results.
The research identified clear patterns across five domains that shape whether AI adoption succeeds or fails. Understanding these is the foundation of any credible AI transformation blueprint.
Here is what most AI transformation frameworks miss entirely: the drivers and barriers of AI adoption do not cancel each other out. Instead, they coexist, creating paradoxical tensions that leaders must navigate rather than resolve.
The research frames this through five paradoxes that operate simultaneously inside healthcare organizations:
| Domain | The Tension |
|---|---|
| Task | AI streamlines workflows AND creates new cognitive burdens |
| Individual | Physicians are curious and optimistic AND fearful and skeptical, often at the same time |
| Technology | AI is highly capable AND poorly integrated with legacy systems |
| Organization | Leadership wants AI adoption AND lacks the infrastructure to support it |
| Environment | Policy mandates push AI adoption AND legal/ethical frameworks constrain it |
This is the core insight for any AI transformation leader: you cannot eliminate these tensions. You must manage them. Organizations that treat adoption as a simple problem of "overcoming resistance" will keep failing. Those that build strategies which hold these tensions in balance will succeed.
Drawing on this research, the practical imperative for any organization serious about AI transformation is to name these tensions explicitly and manage them deliberately, rather than treat adoption as a tooling decision.
The gap between organizations that get AI transformation right and those that get it wrong is widening. The difference is rarely about which AI tools they purchased. It is about whether they understood the human, organizational, and ethical dimensions of transformation before they started.
The research evidence is clear: AI in healthcare is neither a simple efficiency upgrade nor an existential threat. It is a complex, paradox-laden transformation that requires leaders who can hold multiple truths simultaneously and build systems that are as thoughtful about people as they are about technology.
This article is based on peer-reviewed research by Dr. Muhammad Ismail, Henrik Barth, Magnus Holmén, Lena Petersson, and Luís Irgang of Halmstad University, Sweden, published in Technovation (2025). DOI: 10.1016/j.technovation.2025.103333
Dr. Muhammad Ismail is an AI Transformation Trainer at QBSLI, specializing in translating academic research on AI adoption into actionable strategy for organizations navigating digital transformation.