This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI-Driven Approaches for Adverse Event Detection: A Systematic Review of Current Evidence
0
Citations
6
Authors
2026
Year
Abstract
Introduction: Hospital adverse events are a global patient-safety problem, accounting for avoidable death, long-term disability, extended length of stay, and increased healthcare costs. Underreporting is widespread, with fewer than 10% of events actually recorded, and is driven primarily by cultural and organizational factors. Artificial intelligence, in the form of machine learning and natural language processing, has been described as a potential tool for improving the detection and prediction of adverse events using large-scale clinical data.

Materials and Methods: This systematic review followed the PRISMA-DTA guidelines. Scopus, PubMed, and Web of Science were searched using keywords related to adverse events, artificial intelligence methodologies (e.g., machine learning, deep learning, natural language processing), and healthcare settings. Inclusion criteria covered original research on artificial intelligence-based solutions for detecting or predicting adverse events such as medication errors, hospital-acquired infections, and surgical complications. Reviews, meta-analyses, and non-artificial-intelligence studies were excluded. After screening, 15 studies met the inclusion criteria.

Results: The included studies show a shift from rule-based natural language processing models to advanced deep learning and Bidirectional Encoder Representations from Transformers (BERT) models. Early approaches, such as Support Vector Machine classifiers, achieved AUC scores as high as 0.92, while later models (Random Forest, LightGBM, XGBoost) achieved AUCs above 0.93. Large language models reached F1-scores of 0.84 for named entity recognition. Artificial intelligence models also identified incidents that had gone unreported.

Discussion: Artificial intelligence-powered methods are shifting adverse event detection from retrospective review to predictive, proactive monitoring. Challenges remain, however, including limited external validation, class imbalance, and the interpretability of complex models. Future studies must address explainable artificial intelligence, multicenter trials, and high-quality, well-annotated datasets to enable safe clinical integration.
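As a purely illustrative aside (not drawn from any of the reviewed studies), the two headline metrics in the abstract, AUC and F1-score, can be computed for a binary adverse-event classifier as sketched below; all scores, labels, and the 0.5 decision threshold are invented:

```python
# Illustrative sketch: computing AUC and F1 for a binary adverse-event
# classifier. All data below are made-up examples, not study results.

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of (positive, negative) pairs where the positive is ranked
    # higher; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f1(labels, preds):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]                    # 1 = adverse event occurred
    y_score = [0.9, 0.2, 0.8, 0.3, 0.4, 0.1, 0.7, 0.5]   # model risk scores
    y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # thresholded at 0.5
    print(f"AUC = {auc(y_true, y_score):.3f}, F1 = {f1(y_true, y_pred):.2f}")
```

An AUC above 0.9, as reported for several of the included models, means a randomly chosen true adverse-event case receives a higher risk score than a randomly chosen non-event in over 90% of pairs; F1 additionally depends on the chosen decision threshold.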
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,561 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,452 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,948 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations