This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating Expert-Informed AI-based e-Training and Live Expert Training for Continuing Medical Education (CME): A Pilot Study
0
Citations
7
Authors
2025
Year
Abstract
Introduction: Continuing medical education (CME) is essential for more than 100 million healthcare professionals worldwide, supporting the enhancement of knowledge, skills, and patient outcomes. With a growing shift towards online and blended CME formats, expert-informed AI-based training (EIAT) is a novel e-training approach enabling rapid course creation and timely content updates. However, its effectiveness remains underexplored.
Methods: A within-subject pre-post study was conducted with 17 medical students and professionals. Participants first completed a self-paced EIAT module, followed by live expert-led training (LET). Knowledge acquisition was measured via pre- and post-exams for each modality. Satisfaction, engagement, learning outcomes, and content quality were assessed using a 20-item CME-relevant feedback tool aligned with European Accreditation Council for CME standards. Paired t-tests and McNemar’s tests were used to compare outcomes between the two training formats.
Results: The EIAT completion rate was 76%, compared to 82% for LET. Pre-test scores were lower in LET (63 ± 11.4) than in EIAT (73.8 ± 11.2), though not significantly different (p = 0.14). Mean knowledge gain was 17.3 ± 13.3 for EIAT (p = 0.0005) and 29.2 ± 13.2 for LET (p = 0.003), with no significant difference between modalities (mean difference 14.2 ± 21.5, p = 0.17) or in post-test scores (93.5 ± 5.5 vs. 93.3 ± 9.3, p = 0.79). Both formats achieved high satisfaction, with mean recommendation scores of 9.7 (SD = 0.67) for EIAT and 10 (SD = 0.0) for LET, and no significant differences across most evaluation domains (p > 0.05). Engagement was strong in both groups, though 71.4% of LET participants reported being ‘enthusiastically involved’ compared to 60% in EIAT, and 85.8% of LET participants reported gaining substantial practical knowledge versus 50% for EIAT (p = 0.08). Both formats were rated highly in terms of fairness, content quality, and absence of commercial bias.
Conclusion: This pilot study provides preliminary evidence that expert-informed AI-based training is an effective alternative or complement to traditional live CME formats. These findings lay the groundwork for further research into EIAT’s role in scalable, high-quality CME for healthcare professionals and for broader implementation.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations