This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
ChatGPT versus human authors: A comparative study of concept maps for clinical reasoning training with virtual patients
Citations: 0
Authors: 8
Year: 2025
Abstract
This study investigates whether ChatGPT can generate clinically accurate and pedagogically valuable concept maps for clinical reasoning (CR) training. The aim is to assess its potential as a tool for supporting the creation of high-quality educational resources for CR training. We selected 10 diverse virtual patients (VPs) from the European iCoViP project. For each case, CR concept maps were generated by a custom ChatGPT model and compared to expert-created maps available in the CASUS VP system. The comparison encompassed structural metrics (number of concepts, connections, and graph density), clinical content quality (clinical expert evaluation of concept and connection validity), and pedagogical utility (medical educator assessment of clarity, abstraction, and progression). Statistical analysis included Student’s <i>t</i>-tests and interrater reliability using weighted Cohen’s kappa. ChatGPT-generated maps contained significantly more concepts and connections than expert maps, indicating higher structural complexity (<i>p</i> < 0.001), though graph density did not differ significantly. Clinician evaluations showed comparable clinical content quality across both groups, with no statistically significant differences in concept or connection ratings. The educational review revealed that while ChatGPT maps offered comprehensive information, they lacked abstraction, prioritization, and contextual alignment, occasionally exceeding the optimal cognitive load for learners. ChatGPT can reliably generate concept maps that match expert-level clinical accuracy. However, limitations in educational clarity and usability underscore the need for expert refinement. With appropriate oversight, large language models (LLMs) such as ChatGPT can support efficient development of learning resources for CR education.
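The three kinds of analysis named in the abstract can be sketched in a few lines. The snippet below uses invented, illustrative numbers (not the study's data) and assumes SciPy and scikit-learn; the graph-density formula shown is the standard one for a directed graph, which the paper may define differently.

```python
from scipy.stats import ttest_ind
from sklearn.metrics import cohen_kappa_score

# Hypothetical concept counts per VP case for each map source
# (illustrative values only, not the study's measurements).
gpt_concepts    = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]
expert_concepts = [25, 22, 28, 24, 27, 23, 26, 25, 21, 29]

# Student's t-test on the number of concepts per map.
t_stat, p_value = ttest_ind(gpt_concepts, expert_concepts)
print(f"t = {t_stat:.2f}, p = {p_value:.2g}")

# Graph density for a directed concept map:
# realized connections over the n*(n-1) possible ones.
def graph_density(n_concepts, n_connections):
    return n_connections / (n_concepts * (n_concepts - 1))

print(f"density = {graph_density(40, 60):.3f}")

# Interrater reliability: quadratic-weighted Cohen's kappa
# on two raters' ordinal quality ratings (1-5 scale, invented).
rater1 = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]
rater2 = [3, 4, 3, 5, 4, 2, 5, 2, 4, 4]
kappa = cohen_kappa_score(rater1, rater2, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```

With the illustrative counts above, the t-test comes out highly significant, mirroring the abstract's <i>p</i> < 0.001 finding on structural complexity.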
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations