OpenAlex · Updated hourly · Last updated: 11.05.2026, 14:36

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

[Translated article] ChatGPT in a theoretical examination of Orthopaedic Surgery and Traumatology: Clinical and educational value

2026 · 0 citations · Revista Española de Cirugía Ortopédica y Traumatología · Open Access

0 citations · 6 authors · published 2026

Abstract

INTRODUCTION: ChatGPT, a generative artificial intelligence (AI) chatbot, represents a potential tool to support diagnosis, decision-making, and education in orthopaedic surgery and traumatology (OST). The primary aim of this study was to evaluate the ability of ChatGPT-4o to answer questions from a theoretical exam designed for OST residents. The secondary aim was to compare the chatbot's score and response patterns with those of residents, stratified by years of training.

METHODS: This was a retrospective observational study. A theoretical OST exam administered in 2024 to residents at a Spanish tertiary hospital was analyzed. The exam comprised 48 multiple-choice questions (10 including images) across different subspecialties. The responses of ChatGPT-4o and the residents were recorded to compare accuracy rates. In addition, the ability to correctly answer questions was analyzed according to topic and association with images.

RESULTS: ChatGPT-4o correctly answered 34 out of 48 questions (71%). Its accuracy rate was higher than the average of OST residents (67%), achieving a score comparable to fifth-year residents (70%). However, its performance was notably lower in image-based clinical or radiological questions (30% accuracy).

CONCLUSION: ChatGPT-4o is capable of answering questions from a theoretical OST examination, achieving a score higher than the average of OST residents and comparable to that of the most experienced residents (fifth-year). However, the error rate was 29.2%, with notably lower accuracy in questions involving images and those requiring complex clinical reasoning. The use of this AI model cannot replace the expertise and reasoning of medical professionals.

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Delphi Technique in Research