OpenAlex · Updated hourly · Last updated: April 30, 2026, 17:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Does ChatGPT Answer Otolaryngology Questions Accurately?

2024 · 18 citations · The Laryngoscope · Open Access
Open full text at publisher

Citations: 18

Authors: 3

Year: 2024

Abstract

OBJECTIVE: To investigate the accuracy of ChatGPT in answering medical questions related to otolaryngology.

METHODS: A ChatGPT session was opened within which 93 questions were asked related to otolaryngology topics. Questions were drawn from all major domains within otolaryngology and based upon key action statements (KAS) from clinical practice guidelines (CPGs). Twenty-one "patient-level" questions were also asked of the program. Answers were graded as "correct," "partially correct," "incorrect," or "non-answer."

RESULTS: Correct answers were given at a rate of 45.5% (71.4% correct in patient-level, 37.3% CPG); partially correct answers at 31.8% (28.6% patient-level, 32.8% CPG); incorrect at 21.6% (0% patient-level, 28.4% CPG); and 1.1% non-answers (% patient-level, 1.5% CPG). There was no difference in the rate of correct answers between CPGs published before or after the period of data collection cited by ChatGPT. CPG-based questions were less likely to be answered correctly than patient-level questions (p = 0.003).

CONCLUSION: Publicly available artificial intelligence software has become increasingly popular with consumers for everything from storytelling to data collection. In this study, we examined the accuracy of ChatGPT responses to questions related to otolaryngology across 7 domains and 21 published CPGs. Physicians and patients should understand the limitations of this software as it applies to otolaryngology, and programmers in future iterations should consider giving greater weight to information published by well-established journals and written by national content experts.

LEVEL OF EVIDENCE: N/A. Laryngoscope, 134:4011-4015, 2024.

Topics

Artificial Intelligence in Healthcare and Education · Radiology practices and education · Clinical Reasoning and Diagnostic Skills