This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial Intelligence in Plastic Surgery Education: A Global Multi-Model Benchmark of Large Language Models on the Plastic Surgery In-Service Training Examination
Citations: 0
Authors: 4
Year: 2026
Abstract
Background: Large Language Models (LLMs) are increasingly utilized in plastic surgery education. Prior studies have shown that flagship models can achieve high scores on medical examinations, including the Plastic Surgery In-Service Training Examination (PSITE). Yet evaluations often rely on single-shot accuracy of proprietary systems, neglecting stochastic variability and open-source or non-US alternatives.

Objectives: To comprehensively benchmark a globally representative cohort of 14 LLMs on the PSITE, assessing not only accuracy but also inter-run reliability and stochastic variability, and to evaluate their role as educational tools in plastic surgery training.

Methods: A cross-sectional study evaluated 7 proprietary and 7 open-source models using 100 text-based PSITE questions from the 2017–2018 examinations. Each model underwent five independent runs (n=7000 evaluations). Performance metrics included mean accuracy (%), Fleiss' kappa (κ) for reliability, and the coefficient of variation (CV) for stability. Stratified analyses assessed performance across clinical domains, proprietary versus open-source architectures, and paid versus free subscription tiers.

Results: Claude Opus 4.5 (90.2%) and GPT-5.2 Pro (87.0%) achieved the highest accuracy. Proprietary models significantly outperformed open-source alternatives (mean 76.1% vs 60.2%) and demonstrated superior reliability (κ=0.84 vs κ=0.70). Stability varied widely, ranging from consistent error in Falcon H1 (CV=0.00%) to erratic instability in Mistral Medium (CV=32.2%).

Conclusions: Contemporary LLMs possess substantial plastic surgery knowledge, yet meaningful disparities in reliability persist. While proprietary models currently demonstrate superior reliability as educational tools, the presence of stochastic instability necessitates cautious adoption. Accuracy alone is insufficient to judge clinical utility; stability metrics are essential for selecting AI tools in surgical education.
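The Methods section names three metrics: mean accuracy, Fleiss' kappa for inter-run agreement, and the coefficient of variation for stability. The article does not publish its computation code, so the following is a minimal sketch of the standard formulas, assuming each model's five runs are treated as five "raters" over the 100 questions and that CV uses the population standard deviation (both assumptions, not confirmed by the source):

```python
import statistics

def fleiss_kappa(ratings, categories):
    """Standard Fleiss' kappa.
    ratings: one list per question, holding the answer each run gave.
    categories: the possible answer choices (e.g. ["A", "B", "C", "D", "E"]).
    """
    N = len(ratings)        # number of questions
    n = len(ratings[0])     # number of runs ("raters") per question
    # n_ij: how many runs chose category j on question i
    counts = [[row.count(c) for c in categories] for row in ratings]
    # per-question observed agreement P_i
    P = [(sum(v * v for v in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P) / N
    # chance agreement from marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(len(categories))]
    P_e = sum(x * x for x in p)
    return (P_bar - P_e) / (1 - P_e)

def coefficient_of_variation(run_accuracies):
    """CV (%) across runs: population std / mean * 100."""
    mean = statistics.mean(run_accuracies)
    return statistics.pstdev(run_accuracies) / mean * 100
```

Under this reading, Falcon H1's CV=0.00% means its five per-run accuracies were identical (even though the answers were consistently wrong), while Mistral Medium's CV=32.2% reflects large run-to-run swings in accuracy.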