
An Interview with Sam Greengard

AI in Healthcare, Seen from the Outside

A technology journalist on incentives, equity, and why AI reflects the systems we build

By Chul S. Hyun, MD, PhD, MPH

Editor’s Note

This interview opens the inaugural AI & HealthTech section of NexBioHealth. We begin with Sam Greengard not to forecast the future of medicine, but to step back from it. His perspective—shaped by decades of observing how digital systems enter complex institutions—offers a way to examine artificial intelligence in healthcare without hype or inevitability.

What follows is a conversation about how AI actually takes hold, what it amplifies, and why its impact depends less on algorithms than on the values embedded in the systems that deploy them.

AI doesn’t fix broken systems; it accelerates them.

Q1. What feels fundamentally different about AI compared with earlier waves of technology?

A key difference between traditional digital technology and artificial intelligence is that AI is more than a new way to layer on efficiencies. If we look back through digital history—typewriters, personal computers, the internet, mobile phones—these inventions introduced shortcuts that allowed people to accomplish tasks faster and often better. However, they didn’t replace human thinking.

AI “thinks” like a human brain, and generative AI sounds like a real person. Increasingly, AI eliminates the need for a human to handle a task. The repercussions are enormous. Right now, most AI systems automate low-value tasks, but as the technology advances, it will increasingly complement and even replace humans. Although doctors and nurses aren’t going to become obsolete anytime soon, it’s critical to acknowledge that AI will drive fundamental and systemic changes in healthcare.

Q2. Many physicians feel AI is being “done to them” rather than built with them. How does AI actually enter real-world systems? 

AI remains in the early stages, and it is largely untested in many fields, including medicine. Right now, there’s a tendency for healthcare systems to push new AI solutions out to medical professionals—and for busy professionals to reflexively resist changes.

Physicians must be honest and open-minded. It’s easy to reject change simply because a new app or workflow is different. At the same time, it’s important to avoid thinking that AI is a fix-all. The technology shouldn’t replace human decision-making or put more distance between doctors and patients; it should serve as an assistant that can spot issues, provide second opinions, and streamline rote tasks.

Clinicians should take a proactive approach—providing honest feedback about what works and what doesn’t, and getting involved in committees and task forces so that clinical expertise guides AI alongside executives and CIOs.

AI shouldn’t replace judgment; it should support it.

Q3. What is one misconception about AI you most wish professionals would move past?

AI is extremely powerful, and it could profoundly change medicine in the years ahead. But it can’t fix broken processes and systems. Right now, healthcare in the U.S. demands fundamental reform.

As AI evolves and AI agents appear—systems that can automate complex tasks independently of humans—there’s a greater risk that things could go off the rails. With people’s lives at stake, it’s essential to reject the idea that AI is a utopian technology that will fix everything. If AI isn’t used wisely, it could reduce the quality of care and magnify existing inequities—benefiting the affluent at the expense of the poor.

This is where physicians must be vocal. They have an important role in shaping how AI is actually used.

Q4. Is the future of AI in healthcare more likely to be slow transformation or sudden disruption?

There has been a lot of hype, and there will be disappointments. But clinicians who do not adapt to AI will face enormous challenges. A common pattern with powerful technologies is uneven early adoption, followed by a tipping point once systems mature and reach scale.

You don’t need to become a technical guru, but you do need a basic level of understanding and proficiency. Staying informed is no longer optional.

Q5. Was there a reporting moment that sharpened your concern about AI and equity?

I don’t think there was a single “aha” moment. But one recurring theme in my reporting is that digital technologies often benefit certain groups—usually the privileged and affluent—while falling short for everyone else.

As a society, we can’t let profits and cost savings serve as the North Star for AI design and use. They should be only part of the equation. We must factor in ethics and outcomes. Do we want to make the world better—or simply make a few people running AI companies richer?

Profit should never be the North Star for AI in medicine.

Closing Reflection

Greengard’s perspective is notable not because it rejects AI, but because it resists inevitability. Across his answers runs a consistent theme: technology reflects incentives. AI does not arrive as a neutral force—it inherits the values, priorities, and blind spots of the systems that deploy it.

For clinicians navigating AI’s growing presence, the takeaway is not that they must become futurists or technologists. It is that their role as stewards of care gives them a responsibility to question how AI is introduced, what problems it is meant to solve, and whose interests it ultimately serves. In healthcare, the future of AI will be shaped not only by algorithms, but by the human choices surrounding them.

Reading Technology Through Its Consequences

The Internet of Things (MIT Press)

An exploration of how connected systems quietly reshape industries—an idea that resonates as AI becomes invisible infrastructure in healthcare.

Virtual Reality (MIT Press)

A study of how immersive technologies alter perception and behavior, reinforcing a central theme in Greengard’s work: technology’s impact is defined less by novelty than by human adaptation.