OpenAlex · Updated hourly · Last updated: 19.03.2026, 17:26

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Is ChatGPT's Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?

2025 · 0 citations · Open Access
Open full text at publisher

0

Citations

4

Authors

2025

Year

Abstract

Background: ChatGPT is a new artificial intelligence model designed to generate human-like conversation. With advancing knowledge and technological improvements, it shows promise in medicine, particularly as a resource for patients and clinicians.

Objective: The aim of our study is to measure the accuracy and consistency of ChatGPT's answers to questions in the field of rhinology.

Methods: In March 2024, ChatGPT (version 4) was presented with 130 questions in rhinology. Each question was asked twice, and the consistency/reproducibility of the answers was investigated. The answers were evaluated by three ENT physicians using a standardised 4-point scale (1: completely correct; 2: partially correct; 3: a mix of accurate and inaccurate/misleading; 4: completely incorrect/irrelevant).

Results: ChatGPT's answers were consistent at a rate of 91.5% (119/130). Among the inconsistent answers, the second answer was more correct in 10 of 11 cases, a statistically significant difference (p = 0.011). Across the 130 questions, the three evaluators rated 99, 81, and 80 answers (76.2%, 62.3%, 61.5%) as completely correct, and 7, 6, and 7 answers (5.4%, 4.6%, 5.4%) as completely incorrect, with no statistically significant difference between evaluators (p = 0.270).

Conclusion: The inaccuracy of ChatGPT in the patient information and education process is considered to be at an acceptable and reliable level. However, ChatGPT's answers are not completely correct, and it can give misleading answers to some questions. We believe it would be safer and more accurate to use ChatGPT as informative and educational material for patients under the supervision of experts.
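The percentages reported in the abstract follow directly from the stated counts out of 130 questions. A minimal Python sketch to verify this arithmetic (counts are taken from the abstract text, not from the study's raw data):

```python
# Recompute the proportions reported in the abstract.
TOTAL_QUESTIONS = 130

consistent_pairs = 119                 # answer pairs rated consistent
completely_correct = [99, 81, 80]      # per evaluator
completely_incorrect = [7, 6, 7]       # per evaluator

print(f"Consistency: {consistent_pairs / TOTAL_QUESTIONS:.1%}")        # 91.5%
for n in completely_correct:
    print(f"Completely correct: {n / TOTAL_QUESTIONS:.1%}")            # 76.2%, 62.3%, 61.5%
for n in completely_incorrect:
    print(f"Completely incorrect: {n / TOTAL_QUESTIONS:.1%}")          # 5.4%, 4.6%, 5.4%
```

The recomputed values match the figures given in the Results paragraph.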

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Radiology practices and education