
This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Advancing medical education in cervical cancer control with large language models for multiple-choice question generation

2025 · 1 citation · 9 authors · Medical Teacher

Abstract

OBJECTIVE: To explore the feasibility of using large language models (LLMs) to generate multiple-choice questions (MCQs) for cervical cancer control education and to compare them with questions created by clinicians.

METHODS: GPT-4o and Baichuan4 each generated 40 MCQs using iteratively refined prompts; clinicians wrote 40 MCQs for comparison. All 120 MCQs were evaluated by 12 experts across five dimensions (correctness, clarity and specificity, cognitive level, clinical relevance, explainability) using a 5-point Likert scale. Difficulty and discriminatory power were tested with practitioners, and participants were asked to identify the source of each MCQ.

RESULTS: Automated MCQs were similar to clinician-generated ones in most dimensions. However, clinician-generated MCQs had a higher cognitive level (4.00±1.08) than those from GPT-4o (3.68±1.07) and Baichuan4 (3.70±1.13). Testing with 312 practitioners revealed no significant differences in difficulty or discriminatory power among clinicians (59.51±24.50, 0.38±0.14), GPT-4o (61.89±25.36, 0.30±0.19), and Baichuan4 (59.79±26.25, 0.33±0.15). Recognition rates for LLM-generated MCQs ranged from 32% to 50%, with experts outperforming general practitioners at identifying the question setters.

CONCLUSIONS: With engineered prompts, LLMs can generate MCQs comparable to clinician-written ones, although clinician-generated items scored higher on cognitive level. LLM-assisted MCQ generation could improve efficiency but requires rigorous validation to ensure educational quality.
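The abstract reports a difficulty and a discrimination value per question source but does not give the formulas used. A common convention in classical test theory, which the reported ranges are consistent with, is difficulty as the percentage of examinees answering an item correctly and discrimination as the difference in correct-response rates between the top- and bottom-scoring groups. The Python sketch below illustrates that convention only; the function name, the 27% group split, and the simulated data are illustrative assumptions, not taken from the paper.

    # Minimal sketch of classical item analysis, assuming "difficulty" is the
    # percentage of correct responses and "discriminatory power" is an
    # upper-lower discrimination index. The 27% split and all names here are
    # illustrative choices, not from the paper.
    import numpy as np

    def item_stats(responses: np.ndarray, group_frac: float = 0.27):
        """responses: (n_examinees, n_items) array of 0/1 item scores.
        Returns (difficulty in %, discrimination index) per item."""
        totals = responses.sum(axis=1)               # total score per examinee
        order = np.argsort(totals)                   # rank examinees by score
        k = max(1, int(round(group_frac * len(totals))))
        lower, upper = responses[order[:k]], responses[order[-k:]]

        difficulty = 100.0 * responses.mean(axis=0)  # % answering correctly
        discrimination = upper.mean(axis=0) - lower.mean(axis=0)
        return difficulty, discrimination

    # Example: 312 simulated practitioners answering 40 items (sizes mirror
    # the study; the response model itself is invented for the demo).
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(312, 1))
    ease = rng.normal(size=(1, 40))
    scores = (rng.logistic(size=(312, 40)) < ability + ease).astype(int)
    difficulty, discrimination = item_stats(scores)
    print(difficulty.mean(), discrimination.mean())

Under this reading, a mean difficulty near 60 means roughly 60% of practitioners answered a typical item correctly, and discrimination indices around 0.3-0.4 indicate items that separate high and low scorers reasonably well, matching the values reported for all three question sources.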
