This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT as a Digital Pharmacist: A Systematic Review and Meta-Analysis of Drug-Counselling Accuracy
Citations: 0
Authors: 13
Year: 2025
Abstract
Background: The emergence of Large Language Models (LLMs) such as ChatGPT presents significant opportunities for healthcare, yet it raises concerns about accuracy, especially in high-risk areas such as medication counseling. A comprehensive evaluation of ChatGPT's reliability in providing drug information is crucial for its safe integration into clinical practice. This systematic review and meta-analysis aimed to assess the accuracy of drug-counseling information provided by ChatGPT-4.

Methods: Following PRISMA guidelines, we systematically searched PubMed, Embase, Scopus, and Web of Science on May 9, 2025, for original research evaluating the accuracy of ChatGPT (version 4 or newer) on drug-counseling queries. Included studies compared the AI's output against standard comparators such as pharmacists or drug databases. A random-effects meta-analysis was performed to calculate the pooled proportion of accurate responses, and study quality was assessed using a customized Newcastle-Ottawa Scale (NOS).

Results: The search identified 17 eligible studies. Of these, 15 were included in the meta-analysis, which yielded a pooled accuracy rate of 86% (95% CI: 75%–95%). However, significant heterogeneity was observed across studies (I² = 98.5%, p < 0.0001). Study quality was a concern, with only four studies (24%) rated as high quality. No evidence of publication bias was found (p = 0.91).

Conclusion: ChatGPT demonstrates substantial promise in drug counseling, with an 86% accuracy rate that surpasses its performance in other medical domains. However, the high heterogeneity and a non-trivial 14% error rate, coupled with methodological weaknesses in the primary literature, indicate that ChatGPT is not yet ready for autonomous clinical use. Its current role should be as a supplementary tool under the strict supervision of qualified healthcare professionals to ensure patient safety.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- Islamic Azad University Medical Branch of Tehran (IR)
- TechnipFMC (United States) (US)
- Kermanshah University of Medical Sciences (IR)
- University of Tehran (IR)
- Tehran University of Medical Sciences (IR)
- Lorestan University (IR)
- Lorestan University of Medical Sciences (IR)
- Shahid Sadoughi University of Medical Sciences and Health Services (IR)
- Isfahan University of Medical Sciences (IR)
- Jahrom University (IR)
- Shahid Beheshti University of Medical Sciences (IR)
- Islamic Azad University, Tehran (IR)
- Islamic Azad University Pharmaceutical Sciences Branch (IR)
- Tabriz University of Medical Sciences (IR)
- Bushehr University of Medical Sciences (IR)