This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AI Chatbot–Facilitated Clinical Simulation and Peer Role-Play in OSCE Preparation: A Pilot Randomized Controlled Trial on Feasibility and Educational Impact
Citations: 0
Authors: 8
Year: 2025
Abstract
<bold>Background:</bold> Artificial intelligence (AI) is increasingly applied in medical education, but its role in fostering interactive clinical competencies remains underexplored. This pilot study aimed to compare the feasibility and educational impact of an AI chatbot–based simulation with traditional peer role-play (PRP) for Objective Structured Clinical Examination (OSCE) preparation, and to share practical lessons from implementing a novel AI tool in a trial setting. <bold>Methods:</bold> Nineteen final-year Korean medicine students were randomly assigned to either an AI chatbot group (n = 9) or a PRP group (n = 10) after a baseline knowledge test. Both groups underwent a 30-minute physical examination practice session, followed by a one-hour clinical interview training session specific to their group. The AI chatbot group practiced with a text-based chatbot providing scenario-driven responses and automated feedback, while the PRP group practiced in pairs under tutor supervision. All participants then completed two OSCE stations (dizziness and shoulder pain). Performance was assessed using a structured checklist covering four domains: history taking, physical examination, patient education, and physician–patient interaction. Post-study questionnaires evaluated the learning experience. <bold>Results:</bold> Although the differences in OSCE scores between the groups did not reach statistical significance, several complementary trends were observed. The PRP group tended to score higher in history taking (mean 74.4 vs. 66.2 in the dizziness scenario; mean 54.5 vs. 58.6 in the shoulder pain scenario), while the AI chatbot group showed a tendency toward higher scores in patient education (32.5 vs. 22.2 in the dizziness scenario; 85.0 vs. 66.7 in the shoulder pain scenario). Survey results reflected these trends: the PRP group valued the authenticity of the interaction and the exam-like environment, whereas the AI chatbot group reported higher satisfaction with the autonomy, the opportunity for repetitive practice, and the structured feedback. <bold>Conclusion:</bold> In this pilot study, AI chatbot–based training and PRP demonstrated complementary strengths for OSCE preparation. While PRP appears effective for developing performance-based procedural and communication skills in a realistic setting, AI chatbots show potential for fostering clinical reasoning in a self-paced, reflective learning environment. A blended learning model that strategically combines both methods may be the most effective approach to enhancing students' overall clinical competence. Further research is needed to validate these preliminary findings.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations