This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Artificial intelligence and machine learning approaches for patient safety in complex surgery: a review
Citations: 1
Authors: 10
Year: 2025
Abstract
Artificial intelligence (AI) and machine learning (ML) are increasingly being used in surgical care; however, their real-world impact on patient safety is not well established. This narrative review searched PubMed, Scopus, and Google Scholar for English-language studies published from January 1, 2015, to April 30, 2025, that evaluated AI and ML applications in complex surgery and reported quantitative patient safety outcomes. The included studies were published between 2016 and 2025. In total, 21 studies were synthesized across the preoperative, intraoperative, and postoperative phases of surgical care. Preoperatively, ML models consistently outperformed traditional risk scores in identifying high-risk patients and anticipating technical difficulties. Intraoperatively, AI-enabled decision support reduced hypotension exposure in a randomized trial, and computer vision systems supported safety-critical step verification and instrument tracking. Postoperatively, multimodal approaches combining electronic health records, imaging, and smartphone wound photographs predicted complications, such as surgical site infection, and facilitated discharge planning. Emerging evidence from ambulatory surgery, imaging-guided triage, and specialty domains, alongside qualitative studies on workforce readiness, highlights implementation opportunities and human-factors requirements. Most evidence is retrospective, single-center, or prototype stage, with limited external validation and uncertain generalizability across settings, including low- and middle-income countries. Priorities include multicenter prospective trials, standardized outcomes and reporting, continuous monitoring for bias and model drift, robust data infrastructure, and equity-focused implementation to translate algorithmic performance into fewer complications, fewer deaths, and lower costs.
Authors
Institutions
- SIMAD University (SO)
- Kurdistan Technical Institute
- Federal Neuro Psychiatric Hospital (NG)
- University of the East Ramon Magsaysay Memorial Medical Center (PH)
- University of Thessaly (GR)
- Chulalongkorn University (TH)
- Università Campus Bio-Medico (IT)
- Naval State University (PH)
- The Mountain Institute (US)
- Bukidnon State University (PH)
- Palompon Institute of Technology (PH)
- London School of Hygiene & Tropical Medicine (GB)