This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Integrating ChatGPT into knowledge-retrieval tutorials in undergraduate medical education: a prospective evaluation of higher-order learning and feasibility
Citations: 0
Authors: 5
Year: 2026
Abstract
BACKGROUND: Generative artificial intelligence (AI) platforms such as ChatGPT present new opportunities to strengthen competency-based medical education (CBME). While retrieval practice is a proven strategy for enhancing long-term retention, its application to higher-order domains of Bloom's taxonomy within CBME, particularly when scaffolded by AI, remains underexplored. To our knowledge, this is the first longitudinal CBME study worldwide to evaluate supervised ChatGPT-assisted retrieval practice and its durability over time. METHODS: We conducted a six-month, prospective, non-randomised delayed-intervention study in which all 270 third-year MBBS students (including the supplementary batch) at a private medical college in South India were invited to participate in a faculty-supervised ChatGPT-assisted retrieval-practice intervention. Participants were allocated to four tutorial clusters; two received the intervention immediately, while the remaining two received it after a delay. The intervention comprised four weekly two-hour sessions featuring higher-order multiple-choice questions (MCQs), structured faculty-supervised interactions with ChatGPT, and guided metacognitive reflections. Outcomes assessed included MCQ performance at baseline, immediately post-intervention, and at one- and three-month follow-up, as well as student perceptions. Data analysis employed repeated-measures ANOVA and mixed-effects modelling. RESULTS: MCQ performance improved significantly post-intervention (p < 0.001). Gains were sustained at one month (+3.78, d = 0.99) and three months (+3.65, d = 0.95), demonstrating durable higher-order learning retention. Both lower-achieving and higher-achieving students improved, though the effect was greater among lower-achieving students (d = 0.72 vs 0.49). Student feedback revealed high levels of satisfaction (mean 4.25 ± 0.88) and cognitive engagement (4.15 ± 0.92), while clarity of AI interaction received comparatively lower ratings (3.39 ± 1.19).
CONCLUSION: Supervised ChatGPT-assisted retrieval practice produced sustained improvements in higher-order cognitive performance, with particularly strong benefits for lower-achieving students. This scalable, standards-aligned model holds promise for advancing CBME globally and warrants further validation through multi-institutional trials incorporating performance-based and clinical outcomes.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,593 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,483 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,003 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,824 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations