This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Generative Artificial Intelligence in Medical Education: Enhancing Critical Thinking or Undermining Cognitive Autonomy? (Preprint)
Citations: 0
Authors: 5
Year: 2025
Abstract
Generative artificial intelligence (GenAI) enables the production of coherent and contextually relevant text by processing large-scale linguistic datasets. Tools such as ChatGPT, Gemini, Claude, and LLaMA are increasingly integrated into medical education, assisting students with a range of tasks, including clinical reasoning, literature review, scientific writing, and formative assessment. Although these tools offer significant advantages in terms of productivity, personalization, and cognitive support, their impact on critical thinking—a cornerstone of medical education—remains uncertain. The aim of this viewpoint paper is to critically assess the influence of GenAI on critical thinking within medical training, examining both its potential to enhance cognitive skills and the risks it poses to cognitive autonomy. Users have reported increased efficiency and improved linguistic output; however, concerns have also been raised regarding the risk of cognitive overreliance. Current evidence presents a mixed picture, indicating both improvements in learner engagement and potential drawbacks such as passivity or susceptibility to misinformation. Without curricular integration that prioritizes ethical use, prompt engineering, and critical evaluation, GenAI may compromise the cognitive autonomy of medical students. Conversely, when thoughtfully embedded into pedagogical frameworks, these tools can act as cognitive enhancers—supporting, rather than replacing, clinical reasoning. Medical education must adapt to ensure that future physicians engage with GenAI in a critical, ethical, and context-aware manner, especially in complex decision-making scenarios. This transformation demands not only technological fluency but also reflective practice and sustained oversight by faculty and academic institutions.