This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparing Human- and ChatGPT-Generated Multiple-Choice Questions in Athletic Training Education
Citations: 1
Authors: 2
Year: 2025
Abstract
Context: Creating well-written multiple-choice questions (MCQs) requires time and attention to detail. Artificial intelligence tools such as ChatGPT have the potential to assist faculty members in creating exam or practice questions.
Objective: To compare human-generated athletic training–related MCQs with those generated by ChatGPT for quality, clarity, relevance, and difficulty.
Design: Cross-sectional study.
Patients or Other Participants: Ninety-three athletic training faculty teaching in Commission on Accreditation of Athletic Training Education–accredited entry-level athletic training programs completed the survey. Eleven second-year graduate-level athletic training students completed the 20-question quiz.
Main Outcome Measure(s): Faculty participants completed a 2-part survey in which they evaluated 10 pairs of MCQs for grammar, clarity, difficulty, terminology, and suitability on a 5-point Likert scale and indicated which question of each pair they preferred. Each pair included a human-generated question and a ChatGPT-generated question on a similar topic. A 20-question student quiz, built from the same questions used in the faculty survey, was given to second-year master's students nearing graduation to evaluate question quality and difficulty.
Results: The ChatGPT-generated Board of Certification–style questions used in this study were rated similarly to human-generated questions for grammar, stem quality, answer quality, question difficulty, proper use of medical terminology, and suitability of content across all 5 athletic training domains. Most ChatGPT-generated questions were easy to understand, used appropriate terminology, and had answer options similar in style and length.
Conclusions: ChatGPT is another tool that athletic training faculty may consider using to improve the quality and efficacy of exam question preparation. The data from this study suggest that faculty can effectively use ChatGPT for exam question preparation; however, faculty should understand that ChatGPT, like all tools, has its limitations.