This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Developing Critical Judgment of Artificial Intelligence in Medical Education: Applied Insights
Citations: 0
Authors: 1
Year: 2026
Abstract
<ns3:p>Background Artificial intelligence (AI) is rapidly transforming medical education and clinical practice. AI-driven clinical decision support systems, diagnostic tools, and intelligent tutoring systems are helping medical students develop clinical reasoning skills and make better-informed patient care decisions. AI literacy initiatives have grown in recent years to increase understanding of both how AI works and how to use it; however, medical educators receive minimal guidance on how to teach learners to appropriately question or override AI recommendations. This educational gap places learners at risk of automation bias, the tendency to over-rely on computer-based recommendations even when they conflict with clinical judgment or the individual patient's situation. Applied Insights The Applied Insights presented in this article are organized around commonly encountered educational contexts in which learners interact with AI-assisted decision-making. They offer actionable strategies for helping learners recognize when AI recommendations should be questioned, contextualized, or overridden, for example, when there are mismatches between patients and training populations, incomplete or inaccurate input data, or misalignment between the system's priorities and the patient's values. The Applied Insights are grounded in well-established literature on automation bias, patient safety, and clinical decision-making, and were written to be technology-agnostic, useful across multiple specialties and resources, and adaptable to existing curricula without requiring AI-specific expertise. Conclusion Medical educators have a responsibility to prepare learners to use AI safely in clinical practice. By providing strategies for teaching when and how to question AI recommendations, this article supports the development of professional judgment, patient-centered decision-making, and the safe integration of AI in health professions education.</ns3:p>
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations