Episode 65
The Notebook and the Playbook
Educators in Medicine,
In this newsletter, we continue our journey through the fundamentals of AI, its applications in medicine, and its transformative role in faculty development and education. Let’s dive into learning.
Two things landed in my inbox this month from Google — almost on the same day, from two different teams, which was funny. One was a product announcement: a new feature called Notebooks in Gemini that ties their chat assistant and NotebookLM into a single workspace. The other was a white paper: The AI Skills Playbook, a five-step guide to building AI fluency in a workforce.
One is a tool. The other is a framework.
But I read them back-to-back and I ended up thinking they belong together. Tools without skill-building are useless. If you want to grow AI fluency in your learners — or in yourself — this year, you need both. This week, Google happened to give us one of each.
🗂️ Notebooks in Gemini — The Blue Binder, Digitized
Announcement: “Notebooks in Gemini: Your space for ideas, powered by NotebookLM.” Google Blog. April 2026. Read it here → blog.google
Last week, between clinics near a stack of unread medical journals, I pulled up Gemini on my phone to ask a quick question about an outpatient CKD workup. I had to scroll past seven unrelated chats to find the conversation I’d started the day before on TB. By the time I found it, the next patient was ready and I’d lost the thought.
Anyone who leans on an AI assistant in a professional life knows this problem. Chats pile up like unsigned notes— clinical questions, board review, trip planning, lecture outlines. Nothing is organized the way your mind is organized.
Google just launched something small but genuinely useful. Notebooks in Gemini creates a dedicated project space that:
Lets you pin documents, PDFs, and transcripts to a single workspace
Accepts custom instructions specific to that project
Syncs with NotebookLM, so the same source list travels between the chat assistant and the research tool
Can consolidate past chats into one notebook so you don’t lose the thread
This is akin to Claude’s projects which I’ve been using regularly.
Think of it as the old binder on your shelf — the your didactic notes from years ago as a student, and your favorite algorithms printed out — except it lives inside the assistant you’re already talking to, on the phone in your pocket.
Where I think this lands in a medical life:
A notebook per rotation. The intern on inpatient medicine: one notebook with your team’s DVT prophylaxis references, the hospital’s antibiogram, and a running list of things you learned on rounds. Three months later when you rotate back, you’re not starting from zero.
A notebook per topic you teach. When I sit down to prepare a talk on AI in medical education, I’d rather work in a single space that holds the relevant papers, old decks, and a transcript of the last conversation I had with a colleague. Not scattered across my downloads folder and four email threads.
A notebook per patient panel you keep thinking about. I have a handful of patients who live in the back of my head — complex polypharmacy, HFpEF with comorbid everything, the kind of medicine where the differential is long and the social history is longer. A notebook I can come back to, with guidelines and my own dictation, means that when I’m prepping for Monday’s follow-up the assistant is actually useful instead of generic.
One caveat worth knowing: at launch, Notebooks in Gemini is rolling out on paid tiers (AI Ultra, Pro, Plus) on web, with mobile and free access coming in waves. It’s also not yet available on Workspace or Education accounts — so if your institution gave you a Workspace login that governs your Gmail and calendar, wait a bit.
My takeaways:
Start with one notebook, not ten. Pick the rotation, topic, or project you’re actively thinking about this month
Write a one-paragraph custom instruction: who you are, what you want back (evidence-grounded, guideline-cited, flag low-certainty claims)
The sync with NotebookLM means your sources travel with you — from desk research to a bedside question
This is not a replacement for reading the chart. It’s a better place to put the scaffolding around your thinking
The quiet productivity tools — the ones that just help you remember what you already know — are the ones that compound.
🎓 The AI Skills Playbook — Translated for Medical Learners
White Paper: “The AI Skills Playbook: Your guide to building an AI-ready workforce.” Google Cloud. 2026. Read it here → services.google.com
Google Cloud’s new playbook is a five-step guide aimed at business leaders. I read it and kept thinking: most of this works — almost word for word — for a program director, a clerkship director, a chief resident, or a learner taking their own training seriously this month. It deserved a translation to medicine…
1. Establish your baseline.
The playbook says survey your team, figure out where they are, pick metrics for success. For a learner, the baseline is personal. Open your browser history from the last week. How are you actually using AI today?
2. Put AI where you already work.
This is the clinically important one. The playbook points out that adoption fails when AI is a separate workflow, and thrives when it lives inside the tools you already use. For residents, this is everything. “Go learn AI” as a separate curriculum will not survive contact with a post-call day. AI skills grow when they solve a friction you’re feeling right now — drafting the discharge summary, composing a letter to a patient, explaining a diagnosis to a family. Meet learners where the work already is, or accept that the initiative will quietly die. Google notes that 75% of practitioners prefer hands-on AI training. That tracks with every learner I’ve supervised.
3. Build a framework for trust.
The playbook’s third step is responsible AI — principles, guidelines, governance. In medicine, we have something to teach industry on this one. Clinical judgment is the discipline of knowing when your instrument is reliable and when it isn’t. AI is another instrument with a confidence interval. My framework for trainees is three questions asked every time they use an AI output:
Is this a task where a plausible-sounding wrong answer is dangerous?
Can I verify the key claim in under a minute?
Would I be comfortable if the patient, my attending, or the state medical board saw exactly how I used this?
Once these questions become habit, they don’t slow anyone down. They’re just the clinical version of trust, but verify.
4. Make learning flexible.
The playbook says adults learn differently — some in long sessions, some in 10-minute bursts. For a medical learner, flexible learning isn’t aspirational; it’s the only kind available. One new prompt pattern a week. One real task you automated and refined by hand before it left your screen. I’ve watched learners try to carve out dedicated “AI time” and fail for months. The ones who actually get fluent do it on the fly — a hundred small times, on real work.
5. Celebrate every milestone.
For Google, this step is about certification and badges. For a medical learner, it’s something more formative: share what you figured out. Present a case at noon conference on how you worked through a differential with an AI tool — and where it misled you. Write a one-pager for your residency on your favorite prompt patterns. Submit a workshop to your specialty’s annual meeting. Teach the intern what you learned last month.
There’s truth in the old teaching that the one who teaches learns twice. In medicine, explaining has always been how knowledge calcifies. AI fluency will be no different — and we’ll need each other to develop it. No one has written the textbook for this yet. We are the first cohort. The path gets made by the ones who walk it and leave notes for the ones coming behind.
💌 As always, thanks for reading. Get in touch and let me know your thoughts!
Thank you for joining us on this adventure. Stay tuned for more AI insights, best practices, and more future editions of AI+MedEd.
For education and innovation,
Karim

