This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Exploring Generative Artificial Intelligence-Assisted Medical Education: Assessing Case-Based Learning for Medical Students
33 citations · 5 authors · 2024
Abstract
The recent public release of generative artificial intelligence (GenAI) has brought fresh excitement by making GenAI more accessible for medical education than ever before. It is now incumbent upon both students and faculty to determine the optimal role of GenAI within the medical school curriculum. Given the promise and limitations of GenAI, this study aims to assess the current capabilities of a GenAI tool (Chat Generative Pre-trained Transformer, ChatGPT), specifically within the framework of a pre-clerkship case-based active learning curriculum. The role of GenAI is explored by evaluating its performance in generating educational materials, creating medical assessment questions, answering medical queries, and engaging in clinical reasoning when prompted to respond to a problem-based learning scenario. Our results demonstrated that GenAI addressed epidemiology, diagnosis, and treatment questions well. However, there were still instances where it failed to provide comprehensive answers. Responses from GenAI might offer essential information, hint at the need for further inquiry, or sometimes omit critical details. GenAI struggled to generate information on complex topics, raising a significant concern when it is used as a 'search engine' for medical student queries: students cannot be certain whether critical information has been missed. With the increasing integration of GenAI into medical education, it is imperative for faculty to become well-versed in both its advantages and limitations. This awareness will enable them to educate students on using GenAI effectively in medical education.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations