OpenAlex · Updated hourly · Last updated: 12.03.2026, 02:56

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Learning to Work With Artificial Intelligence as Part of Clinical Education

2025 · 0 citations · The Clinical Teacher · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

We live in a digital era. We swim in a sea of data. With every swipe of the phone or click of the mouse, we rely on artificial intelligence (AI). The rise of generative AI, such as ChatGPT, serves as an inflection point—we now realise our ever-increasing reliance on digital technologies. As we are confronted with freely available AI software that can answer questions, make predictions and create art, we understand that almost every aspect of our social existence has deep digital dimensions. How should health professional education, particularly clinical programs, respond to this?

The literature highlights teaching about the technology itself, including the algorithms underpinning AI [1]. Additionally, there are many suggestions that we should be teaching ways to interrogate AI outputs—such as understanding that AI reflects the stereotypes and inaccuracies within its dataset. But this is in some ways too distant from our goal of graduating excellent health care professionals, who will work and continue to learn in the messy world of clinical practice [2]. I believe that understanding AI in situ—how it affects clinical practice itself—is key to our role in preparing health professionals to work in a time of AI.

In my keynote, I reflected on the idea that AI is not just a tool that we use in pursuit of some goal. Rather, it is a presence in our day-to-day lives and in clinical care. AI inserts itself into how we come to know, what knowledges we engage with and how we view patients and ourselves. As a society, we find it challenging to conceptualise AI in a more nuanced way, partly because it comes with cultural baggage. In particular, AI is associated with dystopian science fiction, such as the controlling HAL in 2001: A Space Odyssey, and such narratives can haunt our conceptualisations. In contrast, commonplace machine learning AI software (AIs) are not malign entities but (amazing) predictive algorithms. AIs do not reason, think or feel; rather, they provide inferences when prompted, based on statistical prediction. While this type of description avoids the dystopian fantasy by conceptualising AI as a technology or as providing some particular functionality, it also overlooks AI's role as an active player in society [3]. Thus, we often consider what AI can do (write essays or recognise cancers)—or what it might do (transform patient care)—but we can overlook what AI is doing. This type of oversight is reflected in Yin et al.'s systematic review of AI in patient care [4], which outlined limited and mixed impact on actual patient outcomes, despite many decades of publications describing what AI can or might do.

And what is AI doing in the lives of our learners? In classrooms, students are using it for everyday tasks—to summarise readings [5], to create academic outputs [6], to complete quizzes and other assessments [7] and to get feedback on work [8]. In clinical practice, trainees can also encounter AI on a day-to-day basis: they may be using AI to make diagnoses (especially of images) [9], take notes of consultations [10] and perform clinical informatics tasks [11]. Taken together, this indicates that AI is shifting both how we learn and how we work, and that this has a broad range of implications for health professional education. I would like to draw attention to two ways of working with AI in clinical practice that I think we could usefully help our learners navigate.

First, I think we should be focussing more on teaching our students how to work with doubt. People can experience epistemic dissonance when an AI output and another source (including their own judgement) are in conflict. For example, in an ethnography of radiology practice, a clinician's doubt intensified when the AI made a prediction that the clinician deeply disagreed with [12]. Drawing from Hoeyer's work [13], Rola Ajjawi and I have conceptualised epistemic doubt as:

… cognitive and affective: a state of uncertainty and discomfort. Thus, we can ask students to hold AI interactions in ‘epistemic doubt’—understanding that information within AI interactions may be partial or biased or possibly incorrect. And this can lead them to take information ‘on trust’, while holding epistemic doubt at the same time. In so doing, they can turn from trust to distrust and possibly back again …. [14]

A consequence of the omnipresence of AI is that we need to support students at all stages—from classroom to clinic—to understand the role of epistemic doubt. We want students to ask themselves: What does it mean to trust AI outputs? And how does this play out in clinical practice—what happens, or does not happen, as a consequence of trust or mistrust?

I would also like to suggest another focus for students when they work with AI (and similar algorithmically mediated data) within clinical practice. I think we have an opportunity to address the little-disputed observation that AI reflects the biases, stereotypes and errors of its underlying data. Rola Ajjawi and I have described this problem in relation to gender, noting that ‘AI knowledge practices diminish the female and the feminine at scale’ [15]. For me, teaching critical engagement with AI should also reflect the realities of clinical practice. In my talk, I spoke about how patients can come to be represented by their ‘digital twin’ and how the characteristics associated with this twin—such as age, gender and so on—are likely to be imbued with stereotypes. We need to find ways for our learners to work with digital representations of patients (which will increasingly be mediated by AI or similar predictive algorithms) that allow them to understand that there is a distance between this digital representation and the lived reality of a human being.

Teaching about AI should not separate out the human from the machine. Even learning embodied skills like physical examination is often integrated with digital technologies [16]. However, it may also be useful to note that AIs, like many digital technologies in clinical practice [17], can appear invisible due to their integration with day-to-day practice. That does not mean that we should ignore AI's effects on clinical practice. Rather, it means that we must collaborate with clinicians, students, administrators and patients to find ways of learning to work in an AI-mediated clinical care environment.

Author contributions: Margaret Bearman: writing – original draft, writing – review and editing.

Conflicts of interest: The author declares no conflicts of interest.


Topics

Artificial Intelligence in Healthcare and Education · Healthcare cost, quality, practices · Clinical Reasoning and Diagnostic Skills