This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The potential of ChatGPT as clinical diagnostician for aortic dissection
0
Citations
5
Authors
2024
Year
Abstract
Aortic dissection (AD) belongs to the acute aortic syndromes, which also encompass conditions such as hematoma, aortic ulcer, and thoracic aortic aneurysm rupture, and is among the most frequent and devastating of these diseases. In this study, we evaluated ChatGPT in diagnosing AD and compared ChatGPT-4.0 with ChatGPT-3.5. We conducted a prospective study based on published case reports to investigate whether ChatGPT-4.0 and ChatGPT-3.5 can provide a correct diagnosis from clinical information and imaging features. The results were analyzed with standard statistical methods across different category patterns (Gender, Disease, Age, Admission_date_time or Published_time). For the primary diagnosis, the accuracy of ChatGPT-4.0 was significantly higher than that of ChatGPT-3.5 (0.856 vs. 0.782, p<0.05). For secondary diagnoses, the accuracy of ChatGPT-4.0 was also significantly higher than that of ChatGPT-3.5 (0.856 vs. 0.782, p<0.05). ChatGPT-3.5 showed limitations in processing patient history, symptom presentation, laboratory tests, and imaging data. Classifying and analyzing the causes of misdiagnosis showed that ChatGPT-4.0 has limitations in identifying symptoms and laboratory test data. For both primary and secondary diagnoses, there was no significant difference across the Gender, Disease, Age, and Admission_date_time or Published_time groups for either ChatGPT-4.0 or ChatGPT-3.5 (p>0.05). Both ChatGPT-3.5 and ChatGPT-4.0 were satisfactory for addressing fundamental inquiries related to aortic dissection. ChatGPT has potential in medical diagnosis, and the accuracy of ChatGPT-4.0 is better than that of ChatGPT-3.5 in disease diagnosis, but ChatGPT-4.0 still has limitations in recognizing patient symptoms and laboratory data. Further studies performed in a dynamic clinical practice environment are needed.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations