Episode 58
Promise and Profit
Educators in Medicine,
In this newsletter, we continue our journey through the fundamentals of AI, its applications in medicine, and its transformative role in faculty development and education. Let’s dive into learning.
I am thankful for all the feedback I get from readers like you. Keep it coming!
Two pieces caught my attention this week that crystallize where we are with AI in medicine right now. One shows the technology delivering on its promise. The other reveals the business models that might undermine that promise.
Dr. Hobbs, a great colleague of mine and a pediatrician from an innovative group I am a part of (thanks X = Primary Care), told us that AI is currently the “worst it’s going to be.” At the time, I thought, wow! The sky is the limit! But while writing this, I had some pause. Perhaps larger models and faster answers aren’t always better? Except when they are…better.
When AI Actually Helps: The Mammography Story
Eric Topol’s latest piece makes a compelling case that we should be incorporating AI into all mammogram readings. Not as a replacement for radiologists, but as a consistent second reader that catches what human eyes miss.
The evidence is there. Multiple studies show AI can detect cancers that radiologists miss while also reducing false positives. In some European countries, AI second reading is replacing the traditional double-reading approach - with comparable or better outcomes and significant efficiency gains.
What strikes me about this application is how well it fits the clinical workflow. Radiologists already use computer-aided detection tools. They’re accustomed to considering multiple inputs before making a final call. The AI isn’t trying to replace judgment; it’s augmenting pattern recognition at a scale and consistency humans simply can’t match.
This is what good clinical AI looks like: narrow task, clear evidence, workflow integration, physician oversight. But how long until the human is the “check” after the AI? And then, of course, how long before another AI model is the check? Some encroachment is inevitable.
When AI Follows the Money: The ChatGPT Advertising Pivot
Then there’s OpenAI’s announcement that they’re bringing advertising to ChatGPT. I complained about them last episode too, so I may as well continue here.
A New York Times opinion piece lays out what this really means. Sam Altman spent years positioning OpenAI as different - a benefit corporation, focused on safe AI development, not captured by the usual tech business models. That image helped secure both funding and public trust.
Now we’re getting ads.
I don’t think advertising in a chatbot is inherently evil. OpenEvidence does it too - yes, pharma pays for your search data! How else do you think it’s free? My dad was right - nothing is ever free. But I do think it matters when the same tool being pitched for medical education, clinical decision support, and patient communication is simultaneously optimizing for advertiser revenue.
The incentives shift. Suddenly there’s pressure to keep you engaged longer, to steer conversations toward monetizable topics, to collect more data about your interests and behaviors. These aren’t theoretical concerns - every major platform that’s gone this route has followed the same pattern.
For those of us working in medical education and healthcare, this creates real problems:
Trust erosion. We’ve been advocating for thoughtful AI adoption in clinical settings, assuring skeptical colleagues that these tools can be valuable when used appropriately. It’s harder to make that case when the same companies are pivoting this way. I am not against their monetization, just making a point here.
Data concerns. Healthcare conversations - even hypothetical ones used for education - contain sensitive information. What happens to that data in an advertising-driven model?
Content bias. Will clinical information be subtly shaped by what’s profitable to show? We’ve seen this play out with pharmaceutical advertising and search engines. No reason to think it won’t happen here.
Student and resident exposure. Medical trainees are heavy AI users, often without much guidance on privacy or appropriate use. They’re now training on platforms that are training on them. That’s already the case with OpenEvidence, hence the sign-up asking for your NPI.
What This Means for Medical Education
Here’s my concern: we’re at a moment where AI could genuinely improve both medical practice and medical education. The mammography example shows it’s possible. But we’re making decisions about adoption while the business models are still forming - and those business models will shape what the technology ultimately becomes.
Stay discerning. As educators, we need to help trainees develop this discernment.
I remain optimistic about AI in medicine. The question isn’t whether to use AI in healthcare and medical education, but how to do so with clear eyes about the evidence and the incentives behind the tools. That’s a harder conversation than either uncritical enthusiasm or blanket rejection. But it’s the conversation we continue (here at least) to have.
What do you think? Are we being thoughtful enough about the business models behind the AI tools we’re adopting in healthcare? I’d love to hear from physicians, educators, and trainees on how you’re navigating these questions.
BIG ANNOUNCEMENT COMING NEXT BLOG!
💌 As always, thanks for reading. Get in touch and let me know your thoughts!
Thank you for joining us on this adventure. Stay tuned for more AI insights, best practices, and more future editions of AI+MedEd.
For education and innovation,
Karim
Share this with someone - have them sign up here.

