This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
ChatGPT and Factual Knowledge Questions Regarding Clinical Pharmacy: Correspondence
0
Citations
2
Authors
2024
Year
Abstract
Dear Editor,

The article "Performance of ChatGPT on Factual Knowledge Questions Regarding Clinical Pharmacy" is the subject of this letter.1 In that work, the researchers evaluated ChatGPT's ability to respond to factual knowledge questions regarding clinical pharmacy using a language model trained on medical literature. ChatGPT was asked 264 questions in total, and its answers were assessed for accuracy, consistency, quality of substantiation, and reproducibility. According to the findings, ChatGPT answered 79% of the questions correctly, outperforming pharmacists' accuracy rate of 66%, and the agreement between ChatGPT's answers and the correct answers was 95%.

One weakness of the study is that ChatGPT's performance was assessed using only 264 questions, which may not adequately convey the strengths and limitations of the approach across a wider range of clinical pharmacy subjects. Furthermore, the study included only factual knowledge questions, which may not capture the subtleties and complexities frequently present in clinical practice. There may also have been biases in the selection of questions or in the evaluation standards the researchers employed. Two specific methodological shortcomings are the limited variety of questions posed to ChatGPT and the possibility of inconsistencies in the independent pharmacists' assessment of the quality of substantiation. Moreover, the study did not examine ChatGPT's interpretative or reasoning abilities when applying clinical pharmacy knowledge to real-world circumstances; these elements are necessary for a thorough assessment of ChatGPT's usefulness in clinical settings.

One future direction for this research could be to extend the question dataset to cover a greater variety of clinical pharmacy topics, including more intricate and nuanced scenarios.
Furthermore, additional research into ChatGPT's capacity to offer justifications and explanations for its conclusions might improve the tool's suitability for supporting pharmacists' decision-making. Longitudinal studies could investigate ChatGPT's long-term effectiveness and evaluate its impact on clinical outcomes in pharmacy practice. As the technology advances, continuous upgrades and enhancements might increase ChatGPT's functionality and solidify its position as a trustworthy resource for pharmacists.

Author contributions: Hinpetch Daungsupawong: 50% (ideas, writing, analysis, approval). Viroj Wiwanitkit: 50% (ideas, supervision, approval). The authors declare no conflicts of interest.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations