"If we do this well, we will not produce graduates who know less. We will produce physicians who know how to think better, with or without AI."
Ioulia Chatzistamou
Pathology has always been about interpretation. Long before digital slides and molecular panels, pathologists learned to recognize patterns and, more importantly, to explain what those patterns meant in real biological and clinical terms.
In undergraduate education, we traditionally managed this complexity by asking students to memorize large volumes of material (images, entities, and associations) and to demonstrate recall under exam pressure. For decades, this approach made sense.
But generative AI changes the rules.
When images and information can be retrieved instantly, memorization no longer belongs at the center of pathology education. What becomes truly valuable is judgment: the ability to interpret findings, connect mechanism to context, and reason through uncertainty.
That shift is no longer theoretical. It plays out daily in how students learn. A medical student preparing for a pathology session no longer reaches first for a textbook. Instead, they open a chat window and ask for a summary of renal pathology or a differential diagnosis for a thyroid nodule. Seconds later, a polished, confident response appears. The interaction is efficient and convincing, and in that moment the traditional educational contract of “memorize now, retrieve later” quietly breaks.
Higher-order skills
What replaces it is not the absence of knowledge, but a different relationship to it. Students’ work shifts from recall to discernment: deciding what to ask, trust, and challenge, and how to translate information into decisions that affect real patients. Knowledge still matters, but its role changes. It becomes the foundation for higher-order skills such as verification, reasoning, synthesis, and communication. In pathology, where subtle distinctions carry disproportionate clinical consequences, these skills are not optional. They are the profession.
In this sense, AI is not only disruptive; it is revealing. If a student can generate a reasonable explanation of prostate cancer grading in seconds, we are forced to ask an uncomfortable question: were we teaching understanding, or were we training recall? If a learner can produce a fluent differential diagnosis for a thyroid nodule without ever looking at a slide, what are our assessments actually measuring? AI does not replace pathology expertise; it exposes where education has relied too heavily on memorization as a proxy for thinking.
Across medical education, organizations such as the Association of American Medical Colleges (AAMC) have emphasized responsible, human-centered use of AI, with attention to transparency, privacy, equity, and ongoing evaluation. Similar themes appear in guidance from UNESCO and the World Health Organization. Taken together, these messages point toward a new core competency for our field: AI literacy as a natural extension of diagnostic literacy.
Meaningful questions
An AI-ready pathology student needs to know how to ask better questions. Not “What’s the diagnosis?” but “Which features truly separate papillary thyroid carcinoma from follicular-patterned lesions?” Or “What finding would make my leading diagnosis less likely?” Prompting, in this context, is not a technical skill. It is clinical reasoning made explicit.
Students must also learn to verify rather than trust. AI output should be treated like a junior colleague’s draft report: often useful, sometimes wrong, and never definitive. For example, an AI tool may confidently describe a pigmented skin lesion as melanoma based on cytologic atypia and irregular nesting. A student without a strong grounding in dermatopathology may accept the label at face value. A trained student pauses and asks: Is there architectural symmetry? Is there maturation with descent? Are mitotic figures appropriate for age and location? What features would argue against melanoma?
Equally important is learning to calibrate uncertainty. Pathology is probabilistic. We expect trainees to ask how certain a claim is, what evidence supports it, and what additional information would change the conclusion. AI rarely volunteers uncertainty unless prompted, which makes this habit of mind even more critical.
Students must also learn to detect biases and blind spots. Bias is not only demographic, but also structural. AI systems tend to overrepresent classic cases, rely on outdated classifications, or present conclusions with a confidence that exceeds the strength of the evidence. In prostate pathology, for instance, an AI-generated summary may describe Gleason scoring accurately while failing to emphasize contemporary reporting standards, tertiary patterns, or nuances of grade group interpretation. Without expert guidance, what sounds right can quietly replace what is right.
Deeper reasoning
Finally, students must learn to use AI as a cognitive tool rather than a shortcut. The goal is not speed, but deeper reasoning: constructing structured differentials, articulating mechanisms, and reflecting on uncertainty. These shifts inevitably change what matters in the classroom. If students can generate summaries instantly, lectures that primarily transmit information will feel increasingly misaligned with how learners work. Our value as educators shifts toward what AI cannot do well: curating what truly matters, modeling expert reasoning, teaching verification habits, and designing cases that require judgment rather than recall.
Pathology is uniquely positioned to lead here. We already train learners to hold competing hypotheses and integrate histology, clinical context, imaging, serology, and molecular data. AI can amplify this process if we design learning experiences intentionally. Consider thyroid fine-needle aspiration. Rather than asking students to list Bethesda categories, we can present an AI-generated interpretation and ask: What does this miss? Which cytologic features actually drive management decisions? What uncertainty remains, and how would you communicate it to a clinician? In this framing, AI becomes the starting point for reasoning, not its endpoint.
Smarter assessment
Recent pathology-focused commentaries, including those associated with the College of American Pathologists, reflect cautious optimism. They acknowledge the potential of generative AI for education while warning about overreliance, bias, and erosion of reasoning if guardrails are absent. Nowhere are these tensions more visible than in assessment. Assessment is where educational values become operational. If students can submit polished, AI-assisted work, then traditional take-home assignments and unsupervised quizzes no longer measure what we think they measure. The response should not be prohibition; it should be alignment.
In undergraduate pathology, that alignment favors observed oral explanations, brief “mini sign-out” moments, and in-class interpretive tasks, such as annotating slides and justifying differentials. It values case defenses framed as, “Here is my conclusion, and here is what would change it.” It requires transparency about AI use, with disclosure and verification treated as professional responsibilities. Assessment should reward how students think, not how fluently they transcribe.
A more provocative truth becomes unavoidable: we cannot teach what we have not practiced. The major obstacle to thoughtful AI integration is not student access; it is faculty readiness. Many educators are being asked to redesign assessments, manage policy concerns, and teach AI-literate reasoning without training, time, or shared standards. Guidance from the AAMC and UNESCO is clear: building human capacity and governance frameworks matters more than rushing adoption. Faculty do not need to become AI engineers, but we do need to become AI-literate educators.
Reasoning over recall
Without explicit guardrails, AI can quietly degrade education. Hallucinated facts, overconfident explanations, and unequal access all carry real risk. A workable stance for undergraduate pathology is therefore neither ban nor free-for-all. Expectations must be clear: AI use is permitted for learning and drafting within privacy rules; use must be disclosed; claims must be verified against primary sources; uncertainty and limitations must be articulated; assessments must prioritize reasoning over recall; and faculty must receive structured support.
Pathology has always taught medicine how to think: how to slow down, interrogate evidence, and resist premature certainty. AI does not threaten that mission. It makes it urgent. If we do this well, we will not produce graduates who know less. We will produce physicians who know how to think better, with or without AI.
