AI in Hospitals Is Not a Technology Challenge. It’s a Leadership One

Radiology is where most people notice AI in hospitals.

It’s also where many leaders stop thinking about it.

That’s a mistake because the real transformation isn’t a single algorithm spotting a shadow on an MRI faster than a human. The transformation is operational. It’s governance. It’s procurement. It’s workforce psychology. It’s data residency. It’s what happens when a hospital shifts from “pilot culture” to “production culture” with tools that evolve every two weeks.

I recently spoke with Antonín Hlavinka, Deputy Director for Innovation and Digitalization at University Hospital Olomouc, one of the largest hospitals in the Czech Republic. His day job sits at the uncomfortable intersection of medicine, IT, regulation, and reality. He also teaches future physicians about AI, eHealth, mHealth, cybersecurity, assistive technologies, and interoperability because the next decade of healthcare won’t be won by people who can’t speak both languages.

What struck me wasn’t a list of tools. It was the pattern behind the adoption.

If you want to understand where AI in healthcare is actually heading, start here: AI in hospitals isn’t a feature. It’s becoming a system.

COVID didn’t “accelerate digital.” It rewired expectations.

Before COVID, resistance to telemedicine and digital workflows had a predictable shape: clinicians were cautious, patients were hesitant, and institutions moved at the speed of committee meetings and public procurement.

Then COVID hit, and the dividing line became obvious. Antonín described it as “before COVID” and “after COVID.” That framing matters.

Because after COVID, something fundamental changed: people stopped debating whether digital care belonged in hospitals and started asking why it wasn’t already there.

Hospitals that had been building capabilities suddenly had demand pulling them forward. Not politely, but urgently. Clinicians who previously avoided new systems began requesting them. Patients who had wanted face-to-face as the default began accepting remote interactions as normal.

And the institutions that hadn’t invested early found themselves in the worst possible position: not just behind, but behind while demand was spiking.

That’s what a true inflection point looks like. Not a keynote. Not a strategy deck. A shift in what people expect to be possible.

The AI conversation is split into three realities, and only one is easy

Most AI discussions in healthcare collapse everything into a single narrative: “AI will help doctors.” That’s vague enough to be harmless and useless enough to be comforting.

Antonín’s framing is far more practical:

  1. Administrative AI

  2. Clinical AI

  3. Certified vs. experimental clinical AI

That last distinction is where most optimism goes to die, or where seriousness begins, depending on your temperament.

In the administrative domain, hospitals can deploy tools that reduce friction: copilots, document workflows, contract and agreement validation, and marketing asset generation. It’s not glamorous, but it’s where time leaks happen every day. And because the risk profile is different, adoption can be faster.

Clinical AI is different. Clinical AI touches diagnosis, triage, treatment decisions, and outcomes. That means it drags regulation into the room immediately, and regulation doesn’t care about your demo.

Antonín was blunt about certification: broad, general-purpose models are difficult, maybe impossible, to certify in the way medical devices are certified, because you must prove what the model was trained on, how it behaves today, and how it will behave tomorrow. With general models, tomorrow is the problem.

Narrow AI is more certifiable because you can control the dataset, define the purpose, run trials, and align with medical device regulation standards. But then you hit the next hard truth:

Certification can take three to five years. By the time you finish, your model may already be obsolete.

This is the trap healthcare leaders need to internalize: the certification process is built for a world where tools change slowly. AI evolves fast. That mismatch is not a footnote; it’s a governing constraint.

So the question becomes: how do you innovate inside that constraint without gambling with patient safety or legal exposure?

The answer isn’t “move faster.” The answer is to build an adoption machine.

The hospital that scales AI builds governance, not hype

In Olomouc, they didn’t try to solve AI adoption with enthusiastic clinicians alone. They established an innovation committee with the people you need in the room if you want to move beyond pilots:

  • clinical professionals

  • lawyers

  • a DPO (data protection officer)

  • IT leaders

  • innovation specialists

  • biomedical engineers

Then they created a system where clinicians can propose tools to test or adopt, and the committee evaluates viability: KPIs, cybersecurity, data governance, cost, goals, and, crucially, what “success” means before the pilot starts.

This is the grown-up version of innovation.

Because in most hospitals, “pilot” becomes a synonym for “interesting.” And “interesting” becomes a synonym for “we’ll never scale this.” Public procurement alone can kill momentum if you don’t already know exactly what you want, how to specify it, and how to defend it.

Antonín described the failure mode clearly: not that AI doesn’t work, but that procurement and public-sector dynamics can stall projects due to pricing disputes, narrow specifications, or conflicts of interest.

This is why the Proof of Concept stage is not optional theater. It’s a survival mechanism.

If you can’t run disciplined POCs with defined KPIs, you can’t responsibly adopt AI at scale.
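What do “defined KPIs” look like in practice? Here is a minimal sketch in Python, entirely my own illustration rather than Olomouc’s actual template, of a pilot charter where success is written down before the pilot starts. Every field name, threshold, and the fictional discharge-letter copilot are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Kpi:
    """One measurable pilot outcome, agreed on before the pilot starts."""
    name: str
    baseline: float   # measured before the pilot
    target: float     # the value that counts as success (higher is better here)
    unit: str

@dataclass
class PocCharter:
    """A pilot charter: if these fields can't be filled in, don't start."""
    tool: str
    clinical_owner: str        # the clinician or department proposing the tool
    dpo_approved: bool         # data protection officer sign-off
    data_stays_in_eu: bool     # feeds the data-residency discussion
    kpis: list[Kpi] = field(default_factory=list)

    def succeeded(self, results: dict[str, float]) -> bool:
        """Success is pre-defined: every KPI must reach its target."""
        return all(results.get(k.name, k.baseline) >= k.target for k in self.kpis)

# Fictional example: a discharge-letter copilot pilot
charter = PocCharter(
    tool="discharge-letter copilot",
    clinical_owner="Dept. of Internal Medicine",
    dpo_approved=True,
    data_stays_in_eu=True,
    kpis=[Kpi("letters_per_hour", baseline=2.0, target=3.0, unit="letters/h")],
)
print(charter.succeeded({"letters_per_hour": 3.4}))  # True
```

The point isn’t the code. It’s that a pilot whose charter can’t be filled in shouldn’t start, and a pilot whose KPIs aren’t met shouldn’t quietly become “interesting.”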

Why secure infrastructure matters more than your favorite model

There’s another part of this story that many executives underweight: where your data goes.

In Olomouc, they use administrative AI tools, but they also have agreements for access to advanced language models with guarantees around European server infrastructure and assurances that their data is not used to train future models.

That’s not a technical detail. That’s a prerequisite.

In Europe, GDPR, cybersecurity requirements, and emerging AI regulation are not “red tape.” They are a reality that forces institutions to be explicit about privacy, security, and governance. It slows things down, but it also forces trustworthiness into the design.

And trust is the currency of healthcare.

Hospitals don’t get to behave like consumer apps. A “move fast and break things” culture is malpractice when the thing you break is a patient.

So if your AI strategy starts and ends with “which model should we use,” you’re missing the point. The model is an ingredient. The recipe is infrastructure, governance, and integration.

The most convincing AI cases aren’t futuristic. They’re painfully concrete.

I don’t need speculative visions to believe AI will change healthcare. I need real cases where time compresses and outcomes change.

Two examples from Olomouc made the point.

In one case, clinicians and biochemists struggled to identify a diagnosis for two months. This isn’t an abstract inefficiency; it’s two months of uncertainty for a patient, two months of resource consumption, two months of emotional wear.

They entered the case into a new GPT model and got the correct diagnosis in 22 seconds, ranked first.

If you’re a clinician, that doesn’t feel like “AI support.” It feels like a different era.

In another case, a radiology AI system recognized subtle markers, signals a human could have missed or interpreted only later. That earlier detection changed the patient’s trajectory. That’s not productivity. That’s lifesaving.

These aren’t stories about replacing doctors. They’re stories about amplifying the best doctors and raising the floor for everyone else.

And that’s where the real strategic conversation begins.

AI won’t fix the workforce shortage… unless you redesign the journey

Europe has a shortage of physicians and an even more acute shortage of nurses. Everyone in healthcare knows this, and everyone is tempted by the idea that AI can “solve” it.

But I’m skeptical of magical thinking. AI doesn’t create humans. It creates leverage.

The most practical leverage point Antonín described wasn’t a robot doctor. It was a redesigned front-end of care: a symptom-checker flow patients can use while waiting, via a QR code in the ambulatory setting, producing a structured report for the physician before the consultation even starts.

That’s not flashy. It’s transformative.

Because it moves cognitive load upstream. It turns wasted waiting time into data collection and pre-structuring. It reduces repetitive questioning. It helps clinicians focus on edge cases rather than rederive the same baseline history for the hundredth time.
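To make that concrete, here is a minimal sketch, assuming nothing about the actual Olomouc system, of the kind of structured report a waiting-room symptom checker could hand the physician. All field names and example values are hypothetical illustrations.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PreConsultReport:
    """Structured output a symptom checker could hand the physician
    before the consultation starts (all fields are illustrative)."""
    patient_ref: str        # pseudonymous ID, not a name
    created_at: datetime
    chief_complaint: str
    symptoms: list[str]     # normalized symptom terms
    duration_days: int
    red_flags: list[str]    # anything that should jump the queue
    free_text_notes: str    # the patient's own words, kept verbatim

    def summary(self) -> str:
        """One-line handoff the clinician reads before walking in."""
        flags = ", ".join(self.red_flags) or "none"
        return (f"{self.chief_complaint} for {self.duration_days} d; "
                f"symptoms: {', '.join(self.symptoms)}; red flags: {flags}")

# Example: what the QR-code flow might produce while the patient waits
report = PreConsultReport(
    patient_ref="wait-042",
    created_at=datetime.now(),
    chief_complaint="abdominal pain",
    symptoms=["nausea", "low-grade fever"],
    duration_days=3,
    red_flags=[],
    free_text_notes="Pain gets worse after meals.",
)
print(report.summary())
```

The structure is the handoff: the clinician starts from a normalized summary instead of re-asking the baseline questions.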

If you want AI to help with workforce pressure, this is the play: reduce friction, reduce redundancy, structure information earlier, and shorten cycles.

But you can’t do that without addressing the emotional reality: fear.

The real resistance isn’t technical. It’s existential.

Antonín said something that every hospital leader should write on a whiteboard:

When you introduce AI into administration, accounting, and workflows, people fear losing their jobs, so they fight it.

When you introduce AI into clinical work, some clinicians fear being outperformed, so they resist it.

That’s not irrational. It’s human.

And pretending it won’t happen is leadership negligence.

The response cannot be to shame people into acceptance. It has to be to design adoption in a way that preserves dignity, builds competence, and makes the boundary clear: AI is a tool, not a verdict on your worth.

At the same time, we shouldn’t lie about the trajectory. Antonín referenced examples where institutions are already experimenting with AI agents in governmental roles. Whether or not those experiments are wise, they signal direction: some leaders will try to replace roles.

In healthcare, the more realistic near-term path is not full replacement. It’s role transformation.

The clinician who learns to interrogate models, validate outputs, and integrate AI into practice becomes more powerful. The clinician who refuses may still be competent but increasingly constrained by time and workload.

AI won’t eliminate the need for doctors. But it will absolutely reshape what being a good doctor looks like.

Education is where the long-term battle is already being decided

One of the most revealing moments came when Antonín talked about using AI at home with his daughter to learn mathematics.

Two hours of guided geometry practice: examples generated, solutions explained, and, most importantly, explanations rewritten in different ways until understanding clicks.

“Explain it to me like I’m a child” is not a gimmick. It’s personalized pedagogy on demand.

I stand by the concern I raised: if kids use AI as a substitute for thinking, cognitive effort can decline. But Antonín’s analogy is worth taking seriously. The internet changed how we remember. GPS changed how we navigate. We didn’t become less human; we became different humans.

The variable isn’t the tool. It’s the goal.

If you don’t know what you’re trying to achieve, a model will happily generate an answer that sounds plausible and leads nowhere. If you do know your goal, AI compresses the path.

This matters for healthcare because medical education is about to fork:

  • Students trained to use AI responsibly as decision support

  • Students trained to pretend AI doesn’t exist until it’s unavoidable

One group will graduate fluent in the new reality. The other will graduate into it unprepared.

National competitiveness isn’t about brilliance. It’s about environment.

The Czech Republic has talent. What it lacks, by Antonín’s account, is a sufficiently supportive ecosystem: laws, initiatives, funding structures, and administrative frameworks that make innovation easier rather than harder.

He contrasted this with places where fewer constraints create faster experimentation (even if that comes with future risk). He also pointed to the European Union realizing it must become more competitive and supportive of innovation ecosystems.

This is the strategic layer most hospital AI conversations ignore: your ability to adopt AI is bounded by your national and regional environment.

That’s why structures like a digital innovation hub matter. In Olomouc, they’ve built a “front door” for startups and larger companies: a place to test, pilot, find funding, and connect with clinicians, while ensuring cybersecurity and governance and, when needed, running clinical trials.

Hospitals that want to be leaders in AI adoption will have to become something they weren’t designed to be: test beds with industrial-grade governance.

What the next five to ten years actually look like

It’s tempting to jump straight to the cinematic version of the future: AI-only hospitals, robots in corridors, agents running departments.

Antonín mentioned reports of a highly confidential AI-driven hospital model in China as an early signal of where experimentation is heading.

Will patients accept it? My view aligns with his generational insight: acceptance depends on who the patients are.

Many of us still value human contact as part of healing. But Gen Z and Gen Alpha have grown up differently: more screen-native, often less comfortable with physical interaction, more used to digital interfaces as primary relationships.

They will shape demand.

So I do think we’ll see “agentic” clinical systems emerge rapidly: first as supervised tools, then as semi-autonomous workflows, and eventually as AI-first care settings with human oversight.

Not because it’s philosophically appealing, but because the incentives are relentless: higher precision in narrow tasks, lower cost, faster throughput, and the brutal math of workforce shortages.

The Rubicon has been crossed. Not by ideology. By utility.

My takeaway: the winners won’t be the hospitals with the best AI demos

They’ll be the hospitals that build:

  • Secure data pathways and trustworthy infrastructure

  • An operating model for POCs and procurement

  • Clear KPIs tied to clinical and operational outcomes

  • Governance that includes clinicians, legal, privacy, IT, and engineering

  • A culture that treats AI as a competence, not a threat

AI in healthcare is not a single revolution. It’s a thousand redesigns, some clinical, many administrative, all interconnected.

And if you’re waiting for a perfect, fully certified, future-proof model before you move, you’re already late. Not because you lack ambition, but because the world has moved from “should we?” to “why aren’t we?”

The most serious hospitals aren’t asking whether AI belongs. They’re building the muscle to adopt it safely, prove value, and scale what works.

That’s the difference between experimenting with AI and becoming an AI-enabled institution.

Dr. Peter M. Kovacs
