Episode 62
Why AI Makes You Feel Better (And Why That’s Complicated)
Educators in Medicine,
In this newsletter, we continue our journey through the fundamentals of AI, its applications in medicine, and its transformative role in faculty development and education. Let’s dive into learning.
This week I want to talk about two articles that landed in my reading pile almost back-to-back and ended up speaking to each other in a way I didn’t expect. One is about physician well-being. The other is about training residents in the age of AI. On the surface — different topics. But at the heart of both is the same uncomfortable question: when we let AI take something off our plate, what exactly are we giving up, and what are we getting back?
I was part of a panel on AI a few weeks ago, and a fellow panelist said something like this (paraphrasing): "For every skill AI AUGMENTS, something is AMPUTATED." That line left a strong taste in my mouth — the sense that I can't really do more, like navigating with a GPS, without losing some of the underlying skill along the way.
The answer, it turns out, is not as simple as “more time = better life” or “less struggle = better learning.” Both papers push back on those easy equations. And both are worth your time.
🧠 Section 1: Comfort vs. Meaning — The Well-Being Equation We Keep Getting Wrong
Article: “Meaning and Comfort in Physician Well-Being: An Integrated Framework” Tung MG, Palamara K, Ripp JA, Saddawi-Konefka D. JAMA. March 17, 2026; Volume 335, Number 11. Read it here → doi:10.1001/jama.2026.0472
I used to think burnout was primarily a logistics problem. Fix the inbox, cut the prior auths, get me home by 5:30, and I’ll be fine. And honestly? For a while, some of that works. But this JAMA Viewpoint from a Harvard/Mount Sinai team pointed out: comfort is not the same as fulfillment, and chasing one without the other is a trap.
The authors use two philosophical concepts — hedonia (the pursuit of comfort, reducing friction) and eudaimonia (the pursuit of meaning, purpose, excellence) — and argue that medicine has been lurching between them like a pendulum, when what we actually need is both at once.
To be honest, I don’t know if I fully agree. My personal conviction is firmly rooted in the practice of medicine as a service. To approach it with hedonistic goals puts me at the center of the act, and that framing is itself broken. When I live in a Karim-centric world, yes, everything ends in burnout.
The piece draws on a psychological concept called hedonic adaptation — the human tendency to return to a baseline happiness level regardless of what changes around us. Lottery winners, people who experience serious injury — over time, both groups return to roughly where they started emotionally. The same likely applies to that raise you negotiated or the schedule change you finally got approved. Real, yes. Lasting? Probably not.
Their take on AI scribes is the part that stuck with me. They write that AI documentation tools “improve well-being disproportionate to the actual time savings” — suggesting it’s not just the 30 extra minutes at home that matters. It’s the cognitive bandwidth restored during the visit itself. When I’m not mentally composing the note while the patient is still talking, I’m actually present. The scribe gives back the encounter.
That’s the integrated framework they’re proposing: evaluate every well-being intervention by whether it reduces friction AND restores meaning. Not one or the other.
My takeaways:
Chasing comfort alone creates a transactional relationship with medicine — patients become tasks, work becomes just a job
Meaning requires bandwidth. You can’t access purpose when you’re drowning in administrative overreach and moral distress
The best well-being interventions do both — and we should request that framing from our institutions
Individual physicians have agency: deliberately reflect on meaningful interactions, find your clinical ‘why’, and protect your recovery time so you can show up fully
A line worth sitting with:
“Systems must be built that provide enough comfort to make the work sustainable and preserve enough meaning to make the work worthwhile.”
That’s a sentence I want hung in admin offices...
🎓 Section 2: The AI Scribe and the Missing Struggle
Article: “Supervising Resident AI Use Without Losing the Learning” Preiksaitis C. Journal of Graduate Medical Education. February 2026. Read it here → doi:10.4300/JGME-D-25-01133.1
The paper opens with a scenario that will be instantly recognizable to anyone who teaches:
“The note has everything,” they say. “I’m still…putting it together.”
A second-year resident. Hypotensive patient. AI scribe running in the background. The note is perfect. The resident’s reasoning? Still coming together.
This is the tension at the heart of medical education right now. Health systems want efficiency — shorter notes, faster throughput, fewer clicks. Trainees need productive struggle — the messy, uncertain, sometimes uncomfortable process of building clinical reasoning from scratch. These two goals are not always friends.
I was once taught that "friction" is exactly what progress requires — and the same friction is what learning requires.
Carl Preiksaitis, a Stanford emergency medicine physician and educator, makes a thoughtful case that the answer isn’t to ban AI — it’s to be deliberate about what we protect as human-only cognitive work. His list is clear: first-pass differential diagnosis, initial problem representation, and the assessment and plan. These are the mental reps that build clinical expertise. If AI takes them over routinely, we risk what he calls “never-skilling” — residents who never develop robust mental models because they never had to build them.
The practical framework he introduces is called DEFT-AI, adapted from Abdulnour et al. (NEJM, 2025), which we’ve discussed here before — a quick clinical supervision tool that takes under two minutes:
D — Diagnose the AI moment ("Did you use any tools to help with this case?")
E — Explore the inputs ("What did you tell the AI? What does it not know about this patient?")
F — Feedback on reasoning ("Tell me your plan as if the AI weren’t here.")
T — Teach verification ("How would we check this? What if it’s wrong?")
AI — Advice on future use ("Next time, form your own plan first, then use AI as a double-check.")
This is elegant. It doesn’t punish AI use; instead, it illuminates the resident’s thinking around the AI. The note becomes a teaching artifact. The AI suggestion becomes a prompt for critical appraisal. Naturally, it takes some time — but under two minutes per encounter is a small investment.
I especially appreciated his note that the AI-generated note can still be educational if residents are asked to find where it misrepresented their thinking — a beautiful reframe. Catching the AI’s mistake requires the resident to articulate why the detail matters clinically.
My takeaways:
AI literacy is now an educational obligation, not optional enrichment — residents will use these tools when they graduate; we need to teach them how
Protect differential diagnosis as a human-first task across every specialty
Use DEFT-AI as a low-lift, high-yield supervisory habit — one question per encounter, consistently asked
The AI-generated note isn’t the end of learning; it’s the beginning of a different kind of conversation
A line that impacted how I think about this:
“If we are deliberate, AI can help strip away low-yield busywork and refocus training on the most human parts of medicine: working through uncertainty, caring for patients, and reflecting on our own thinking.”
That’s not a threat to medical education. That’s actually a vision for it.
🔗 The Thread That Connects Both Papers
Here’s what struck me when I read these back-to-back: both papers are fundamentally about the same thing.
The JAMA piece argues that AI scribes improve well-being not because of time saved, but because of cognitive space restored — space for presence, connection, and meaning during the encounter.
The JGME piece argues that AI scribes risk harming training not because of the tool itself, but when they remove the cognitive struggle that builds clinical reasoning.
Same tool. Same mechanism (offloading cognitive work). Opposite implications — depending on who is using it, and why.
For the seasoned attending — cognitive offloading can be a gift. For the second-year resident still building their mental models — it can quietly steal the struggle they need.
This is why blanket policies ("everyone uses scribes" or "no residents use AI") miss the point. Believe it or not, I’ve heard of ENTIRE health systems banning AI…yes, in 2026! Context matters. Stage of training matters. Intentionality matters.
As physicians and educators, we’re living in a moment that requires us to hold both of these truths at once: AI can restore meaning to practice and we must protect the human work of becoming a physician. These aren’t contradictions. They’re both parts of the same calling.
💌 As always, thanks for reading. Get in touch and let me know your thoughts!
Thank you for joining us on this adventure. Stay tuned for more AI insights, best practices, and future editions of AI+MedEd.
For education and innovation,
Karim
Share this with someone - have them sign up here.

