This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluation of the Accuracy and Reliability of Responses Generated by Artificial Intelligence Related to Clinical Pharmacology
Citations: 0
Authors: 5
Year: 2025
Abstract
<b>Background/Objectives:</b> Artificial intelligence (AI) is gaining importance in clinical pharmacology, supporting therapeutic decisions and the prediction of drug interactions, although its applications have significant limitations. The aim of the study was to evaluate the accuracy of the responses of four large language models (LLMs), namely ChatGPT-4o, ChatGPT-3.5, Gemini Advanced 2.0, and DeepSeek, in the field of clinical pharmacology and drug interactions, as well as to analyze the impact of prompting and questions from the National Specialization Examination for Pharmacists (PESF) on the results. <b>Methods:</b> In the analysis, three datasets were used: 20 case reports of successful pharmacotherapy, 20 reports of drug-drug interactions, and 240 test questions from the PESF (spring 2018 and autumn 2019 sessions). The responses generated by the models were compared with source data and the official examination key and were independently evaluated by clinical-pharmacotherapy experts. Additionally, the impact of prompting techniques was analyzed by expanding the content of the queries with detailed clinical and organizational elements to assess their influence on the accuracy of the obtained recommendations. <b>Results:</b> The analysis revealed differences in the accuracy of responses between the examined AI tools (<i>p</i> < 0.001), with ChatGPT-4o achieving the highest effectiveness and Gemini Advanced 2.0 the lowest. Responses generated by Gemini were more often imprecise and less consistent, which was reflected in their significantly lower level of substantive accuracy (<i>p</i> < 0.001). The analysis of more precisely formulated questions demonstrated a significant main effect of the AI tool (<i>p</i> < 0.001), with Gemini Advanced 2.0 performing significantly worse than all other models (<i>p</i> < 0.001). 
An additional analysis comparing responses to simple and extended questions, which incorporated additional clinical factors and the mode of source presentation, did not reveal significant differences either between AI tools or within individual models (<i>p</i> = 0.34). In the area of drug interactions, it was also shown that ChatGPT-4o achieved a higher level of response accuracy compared with the other tools (<i>p</i> < 0.001). Regarding the PESF exam questions, all models achieved similar results, ranging between 83 and 86% correct answers, and the differences between them were not statistically significant (<i>p</i> = 0.67). <b>Conclusions:</b> AI models demonstrate potential in the analysis of clinical pharmacology; however, their limitations require further refinement and cautious application in practice.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 citations