This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Assessing the potential of ChatGPT-4 to accurately identify drug-drug interactions and provide clinical pharmacotherapy recommendations
3 citations · 3 authors · 2024
Abstract
Background: Large language models (LLMs) such as ChatGPT have emerged as promising artificial intelligence tools to support clinical decision making. The ability of ChatGPT to evaluate medication regimens, identify drug-drug interactions (DDIs), and provide clinical recommendations is unknown. The purpose of this study is to examine the performance of GPT-4 in identifying clinically relevant DDIs and to assess the accuracy of the recommendations provided.

Methods: A total of 15 medication regimens were created containing commonly encountered DDIs that were considered either clinically significant or clinically unimportant. Two separate prompts were developed for medication regimen evaluation. The primary outcome was whether GPT-4 identified the most relevant DDI within the medication regimen. Secondary outcomes included rating GPT-4's interaction rationale, clinical relevance ranking, and overall clinical recommendations. Interrater reliability was determined using the kappa statistic.

Results: GPT-4 identified the intended DDI in 90% of medication regimens provided (27/30). GPT-4 categorized 86% as highly clinically relevant, compared to 53% categorized as highly clinically relevant by expert opinion. Inappropriate clinical recommendations potentially causing patient harm were provided in 14% of responses provided by GPT-4 (2/14), and 63% of responses contained accurate information but incomplete recommendations (19/30).

Conclusions: While GPT-4 demonstrated promise in its ability to identify clinically relevant DDIs, application to clinical cases remains an area of investigation. Findings from this study may assist in the future development and refinement of LLMs for drug-drug interaction queries to support clinical decision-making.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations