This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Artificial Intelligence in Perinatal Medicine: A Systematic Review of Current Applications, Limitations, and a Translational Roadmap for the Foundation-Model Era
Citations: 0
Authors: 12
Year: 2025
Abstract
Artificial intelligence (AI) is increasingly applied across perinatal care, yet the maturity of the evidence base and its readiness for routine practice remain uncertain. We conducted a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 systematic review to map applications, appraise quality, and outline translational requirements. We searched PubMed/MEDLINE, Embase, Scopus, Web of Science, IEEE Xplore, the Cochrane Library, ClinicalTrials.gov/ICTRP, and medRxiv/bioRxiv from 2000 to 2 September 2025. Two reviewers independently screened records and extracted data, with disagreements resolved by a third reviewer. Eligibility criteria included human perinatal studies reporting AI model development or validation, prospective cohorts or trials, detailed protocols with explicit AI methods, and systematic or scoping reviews on applications, ethics, or equity. Studies that were non-AI, non-perinatal, abstract-only, or non-English without translation were excluded. Risk of bias was assessed using the Newcastle–Ottawa Scale (observational studies), A MeaSurement Tool to Assess systematic Reviews, version 2 (AMSTAR-2) (systematic reviews), and the Risk Of Bias In Systematic reviews (ROBIS) tool (reviews and scoping reviews). Heterogeneity precluded meta-analysis; synthesis followed Synthesis Without Meta-analysis (SWiM) principles. Thirty-six studies met the inclusion criteria, with twenty designated as a pre-specified "core" set based on decision relevance and quality. Applications spanned preconception (fertility, maternal risk), antenatal (fetal growth restriction, preeclampsia, preterm birth, anomalies), intrapartum (delivery mode/timing, fetal monitoring), and neonatal outcomes (pulmonary hemorrhage, composite morbidity). Across imaging-plus-clinical and EHR-based models, discrimination often exceeded that of baseline tools, while calibration, external or temporal validation, subgroup performance, code/data availability, and impact evaluation were inconsistently reported.
Limitations include retrospective designs, single-site datasets, outcome heterogeneity, English-language restriction, and publication bias. AI in perinatal medicine shows technical promise but uneven clinical readiness. We propose a staged roadmap emphasizing standardized data and reporting, multi-site and temporal validation with recalibration, interoperable workflow delivery, privacy-preserving and fair learning, and continuous calibration, uncertainty, and drift monitoring. Registration: none; funding: none.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations