This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AI versus human-generated multiple-choice questions for medical education: a cohort study in a high-stakes examination
Citations: 46
Authors: 7
Year: 2025
Abstract
ChatGPT-4o demonstrates the potential for efficiently generating MCQs but lacks the depth needed for complex assessments. Human review remains essential to ensure quality. Combining AI efficiency with expert oversight could optimise question creation for high-stakes exams, offering a scalable model for medical education that balances time efficiency and content quality.
Similar works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,538 citations
Making sense of Cronbach's alpha
2011 · 13,693 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,554 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,455 citations
Evidence-Based Medicine
1992 · 4,136 citations