This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Use of machine learning in pediatric surgical clinical prediction tools: A systematic review
15
Citations
6
Authors
2023
Year
Abstract
PURPOSE: Clinical prediction tools (CPTs) are decision-making instruments that use patient data to predict specific clinical outcomes, risk-stratify patients, or suggest personalized diagnostic or therapeutic options. Recent advances in artificial intelligence have produced a proliferation of CPTs built with machine learning (ML), yet the clinical applicability of ML-based CPTs and their validation in clinical settings remain unclear. This systematic review compares the validity and clinical efficacy of ML-based CPTs with traditional CPTs in pediatric surgery.

METHODS: Nine databases were searched from 2000 until July 9, 2021 to retrieve articles reporting on CPTs and ML for pediatric surgical conditions. PRISMA standards were followed; screening was performed by two independent reviewers in Rayyan, with a third reviewer resolving conflicts. Risk of bias was assessed using PROBAST.

RESULTS: Of 8,300 studies, 48 met the inclusion criteria. The most represented surgical specialties were pediatric general surgery (14), neurosurgery (13), and cardiac surgery (12). Prognostic CPTs (26) were the most common type, followed by diagnostic (10), interventional (9), and risk-stratifying (2); one study included a CPT serving diagnostic, interventional, and prognostic purposes. 81% of studies compared their CPT to other ML-based CPTs, statistical CPTs, or the unaided clinician, but lacked external validation and/or evidence of clinical implementation.

CONCLUSIONS: Although most studies claim significant potential improvements from incorporating ML-based CPTs into pediatric surgical decision-making, both external validation and clinical application remain limited. Future studies should focus on validating existing instruments or developing validated tools, and on incorporating them into the clinical workflow.

TYPE OF STUDY: Systematic review.
LEVEL OF EVIDENCE: Level III.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,549 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,941 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations