This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT‐4 performance in rhinology: A clinical case series
22
Citations
5
Authors
2024
Year
Abstract
Chatbot Generative Pre-trained Transformer (ChatGPT)-4 indicated more than twice as many additional examinations as practitioners in the management of clinical cases in rhinology. The consistency between ChatGPT-4 and practitioners in the indication of additional examinations may vary significantly from one examination to another. ChatGPT-4 proposed a plausible and correct primary diagnosis in 62.5% of cases, while pertinent and necessary additional examinations and therapeutic regimens were indicated in 7.5%-30.0% and 7.5%-32.5% of cases, respectively. The stability of ChatGPT-4 responses is moderate to high. The performance of ChatGPT-4 was not influenced by the human-reported level of difficulty of the clinical cases.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations
Authors
Institutions
- Centre National de la Recherche Scientifique (FR)
- Institut Universitaire des Systèmes Thermiques Industriels (FR)
- Hôpital de la Conception (FR)
- Centre de recherches sociologiques et politiques de Paris (FR)
- Aix-Marseille Université (FR)
- University of Sassari (IT)
- Laboratoire de Phonétique et Phonologie (FR)
- Université Sorbonne Nouvelle (FR)
- University of Mons (BE)
- Université Paris-Saclay (FR)
- Centre Hospitalier Universitaire de Saint-Pierre (BE)