This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluation of the Quality and Reliability of ChatGPT-4's Responses on Allergen Immunotherapy Using Validated Tools
Citations: 0
Authors: 20
Year: 2024
Abstract
Background: Artificial intelligence (AI) technologies could potentially change many aspects of clinical practice. While allergen immunotherapy (AIT) can change the course of allergic diseases, providing relief of symptoms that extends for many years after treatment completion, it can also bring uncertainty to patients, who turn to readily available resources such as ChatGPT-4 to address these doubts. The aim of this study was to use validated tools to evaluate the information provided by ChatGPT-4 regarding AIT in terms of quality, reliability, and readability. Methods: In accordance with AIT clinical guidelines, 24 questions were selected and entered into ChatGPT-4. The answers were evaluated by a panel of allergists using the validated tools DISCERN, the JAMA Benchmark, and the Flesch Reading Ease Score and Grade Level. Results: Questions were sorted into 6 categories. ChatGPT-4 provided poor-quality information, according to DISCERN median scores, in the "Definition", "Standardization and Efficacy", and "Safety and Adverse Reactions" categories. It provided insufficient information according to the JAMA Benchmark across all categories. Finally, ChatGPT-4's answers required a "college graduate" level of education to be understood, as they were very difficult to read. Conclusions: ChatGPT-4 shows potential as a valuable complement to healthcare; however, it requires further refinement. The information it provides should be approached with caution regarding its quality, as significant details may be omitted or may not be fully comprehensible. Artificial intelligence models continue to evolve, and medical professionals should participate in this process, given that AI impacts various aspects of life, including health, to ensure the availability of optimal information.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
- Iván Chérrez-Ojeda
- Torsten Zuberbier
- Gabriela Rodas‐Valero
- Jorge Sánchez
- Michael Rudenko
- Stephanie Dramburg
- Pascal Demoly
- Davide Caimmi
- Maximiliano Gómez
- German D. Ramón
- G. E
- K. R.
- Herberto José Chong‐Neto
- Oscar Calderón Llosa
- José Ignacio Larco
- Olga Patricia Monge-Ortega
- Marco Faytong‐Haro
- Oliver Pfaar
- Jean Bousquet
- Karla Robles‐Velasco
Institutions
- Charité - Universitätsmedizin Berlin (DE)
- Universidad de Especialidades Espíritu Santo (EC)
- Centre Hospitalier Universitaire de Montpellier (FR)
- Universidad Católica de Salta (AR)
- Food and Drug Administration (TH)
- University of the West Indies (JM)
- Hospital de Clínicas Universidade Federal do Paraná (BR)
- Universidad Estatal de Milagro (EC)
- Philipps University of Marburg (DE)