OpenAlex · Updated hourly · Last updated: 23 Mar 2026, 16:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Medication safety and artificial intelligence: evidence, failure modes, and how to vet AI tools before they touch patient care

2025 · 0 citations · Annals of Innovation in Medicine · Open Access
Open full text at publisher

0 citations · 1 author · 2025

Abstract

Medication harm remains common, with a substantial portion preventable at the stages of prescribing and administration. (1) I reviewed evidence on how artificial intelligence (AI)—including rule-based clinical decision support, machine learning (ML), natural language processing (NLP), and large language models (LLMs)—can improve medication safety, and how these tools can fail.

Methods: On 10 February 2026, I searched PubMed for systematic reviews/meta-analyses, high-quality observational studies, human factors evidence, reporting/risk-of-bias guidance, and peer-reviewed LLM evaluations relevant to prescribing error detection, medication reconciliation, dosing support, adverse drug event (ADE) prediction, pharmacovigilance, and medication-related documentation. All included citations were verified on PubMed by confirming PMIDs and bibliographic details.

Results: Evidence for computerized provider order entry (CPOE) and clinical decision support systems (CDSS) suggests improvements in practitioner performance and some reductions in preventable ADEs, but effects are heterogeneous and systems can facilitate new error types. (2–5) Safety alerts are frequently overridden (49%–96%), and evidence that interruptive prescribing alerts improve patient outcomes is limited. (6,7) Automation bias is a documented risk: correct CDSS can reduce omission errors, yet incorrect CDSS can increase omission errors and induce commission errors. (12,13,15) ML-based ADE prediction shows emerging performance (pooled AUC 0.72, 95% CI 0.68–0.75), but prediction studies frequently show high risk of bias, motivating use of TRIPOD/PROBAST and careful validation. (9–11,17) In pharmacovigilance using routinely collected data, methods and performance reporting are heterogeneous, and no method is uniformly superior. (18) In medication text workflows, LLMs can reduce specific direction errors when designed with domain logic and guardrails, but hallucinations and omissions remain safety-critical. (20,21)

Conclusions: AI can improve medication safety when tasks are bounded, outputs are auditable, and systems are integrated with strong human factors design and lifecycle governance; deployment requires local validation, bias and dataset shift monitoring, and formal safety evaluation—especially for generative AI. (9–11,20–27)
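The discrimination figure cited above (pooled AUC 0.72, 95% CI 0.68–0.75) is the kind of metric local validation would recompute on a site's own data. A minimal sketch of how an AUC and a percentile-bootstrap confidence interval are typically obtained with scikit-learn; the data here are synthetic and purely illustrative, not from the study:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Illustrative synthetic outcomes: 1 = adverse drug event observed, 0 = not.
y_true = rng.integers(0, 2, size=500)
# Hypothetical model risk scores, weakly separated by outcome class.
y_score = 0.6 * y_true + rng.normal(0.0, 1.0, size=500)

auc = roc_auc_score(y_true, y_score)

# Percentile bootstrap for a 95% CI on the AUC.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) < 2:
        continue  # resample lacks both classes; AUC undefined, skip
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A point estimate with a bootstrap interval mirrors how pooled AUCs are reported in the abstract; a real validation would also assess calibration and follow TRIPOD/PROBAST guidance, which this sketch does not attempt.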


Topics

Pharmacovigilance and Adverse Drug Reactions · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare