This is an overview page with metadata for this scientific work. The full article is available from the publisher.
ChatGPT as a medical doctor? A diagnostic accuracy study on common and rare diseases
Citations: 60
Authors: 4
Year: 2023
Abstract
Seeking medical advice online has become popular in the recent past. Therefore, a growing number of people might ask the recently hyped ChatGPT for medical information regarding their conditions, symptoms, and differential diagnosis. In this paper we tested ChatGPT for its diagnostic accuracy on a total of 50 clinical case vignettes, including 10 rare case presentations. We found that ChatGPT 4 solves all common cases within 2 suggested diagnoses. For rare disease conditions, ChatGPT 4 needs 8 or more suggestions to solve 90% of all cases. The performance of ChatGPT 3.5 is consistently lower than the performance of ChatGPT 4. We also compared the performance between ChatGPT and human medical doctors. We conclude that ChatGPT might be a good tool to assist human medical doctors in diagnosing difficult cases, but despite the good diagnostic accuracy, ChatGPT should be used with caution by non-professionals.
Related works
The Strengths and Difficulties Questionnaire: A Research Note
1997 · 14,535 citations
Making sense of Cronbach's alpha
2011 · 13,681 citations
QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies
2011 · 13,546 citations
A method for estimating the probability of adverse drug reactions
1981 · 11,454 citations
Evidence-Based Medicine
1992 · 4,135 citations