This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Utilizing Artificial Intelligence and Chat Generative Pretrained Transformer to Answer Questions About Clinical Scenarios in Neuroanesthesiology.
Citations: 0
Authors: 9
Year: 2025
Abstract
OBJECTIVE: We tested the ability of chat generative pretrained transformer (ChatGPT), an artificial intelligence chatbot, to answer questions relevant to scenarios covered in 3 clinical management guidelines published by the Society for Neuroscience in Anesthesiology and Critical Care (SNACC): endovascular treatment of stroke, perioperative stroke (Stroke), and care of patients undergoing complex spine surgery (Spine).

METHODS: Four neuroanesthesiologists independently assessed whether ChatGPT could apply 52 high-quality recommendations (HQRs) included in the 3 SNACC guidelines. An HQR was deemed present in a ChatGPT response if noted by at least 3 of the 4 reviewers. Reviewers also identified incorrect references, potentially harmful recommendations, and whether ChatGPT cited the SNACC guidelines.

RESULTS: Overall reviewer agreement on the presence of HQRs in the ChatGPT answers ranged from 0% to 100%. Only 4 of 52 (8%) HQRs were deemed present by at least 3 of the 4 reviewers after 5 generic questions, and 23 (44%) HQRs were deemed present after at least 1 additional targeted question. Potentially harmful recommendations were identified for each of the 3 clinical scenarios, and ChatGPT failed to cite the SNACC guidelines.

CONCLUSIONS: Whether the ChatGPT answers included the HQRs was open to human interpretation. Although targeted questions elicited more HQRs than generic questions, fewer than 50% of HQRs were noted even after targeted questioning. This suggests that ChatGPT should not currently be considered a reliable source of information for clinical decision-making. Future iterations of ChatGPT may refine its algorithms and improve its reliability as a source of clinical information.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations