This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Application of artificial intelligence in predicting the results of open-heart surgery: a scoping review
Citations: 2
Authors: 5
Year: 2025
Abstract
PURPOSE: This scoping review aims to synthesize research on artificial intelligence (AI) in predicting open-heart surgery outcomes, evaluating AI model performance, and identifying gaps in data quality, algorithmic bias, and clinical applicability to guide future advancements in personalized surgical planning and patient outcomes. METHODS: Conducted in accordance with the PRISMA-ScR guideline, the review involved a systematic search across PubMed, Web of Science, IEEE, and Scopus. Articles were included if they focused on open-heart surgery, utilized AI methods, and were published in English. Exclusion criteria were non-relevance to open-heart surgery, non-original research, and lack of AI techniques. Data extraction covered study details, AI methods, and performance metrics. Descriptive statistics were used for analysis. RESULTS: Of the 64 included studies, 89.06% were retrospective. The most frequently employed algorithm was logistic regression (n = 41), followed by random forest (n = 38) and XGBoost (n = 32). Most studies focused on predicting postoperative outcomes; mortality, acute kidney injury, and complications were the most frequently studied. XGBoost exhibited the best performance in 11 of the 32 studies in which it was used. Deep learning and hybrid models were underutilized. Major limitations included inconsistent model validation, limited prospective data, and lack of diversity in patient populations. CONCLUSION: AI demonstrates promising predictive capabilities in open-heart surgery, particularly through machine learning models. These models can already assist surgeons in real-world practice by supporting real-time risk stratification and personalized decision-making, such as identifying high-risk patients for targeted interventions. However, methodological limitations hinder clinical translation. Future work should emphasize prospective validation, explainable AI, and equitable data representation to enhance model reliability and applicability in real-world settings.
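To make the kind of modeling the review surveys concrete, the following is a minimal, purely illustrative sketch of logistic regression (the most frequently employed algorithm across the included studies) fitted to synthetic data. The feature set (age, ejection fraction, creatinine), coefficients, and data are invented for illustration and are not taken from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600
# Hypothetical preoperative features (invented for illustration):
# age (years), left-ventricular ejection fraction (%), serum creatinine (mg/dL)
X = np.column_stack([
    rng.normal(65, 10, n),
    rng.normal(55, 8, n),
    rng.normal(1.0, 0.3, n),
])
# Synthetic binary outcome labels loosely tied to the features
true_logits = 0.05*(X[:, 0]-65) - 0.04*(X[:, 1]-55) + 1.5*(X[:, 2]-1.0) - 2.0
y = (rng.random(n) < 1/(1+np.exp(-true_logits))).astype(float)

# Standardize features and fit logistic regression by gradient descent
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xb = np.column_stack([np.ones(n), Xs])          # prepend intercept column
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1/(1+np.exp(-Xb @ w))                   # current predicted risk
    w -= 0.1 * Xb.T @ (p - y) / n               # gradient step on log-loss

risk = 1/(1+np.exp(-Xb @ w))                    # per-patient predicted risk
print("mean predicted risk:", round(float(risk.mean()), 3))
```

In practice the reviewed studies apply such models to real registry or EHR cohorts with proper train/test splits and external validation, which this toy example deliberately omits.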
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,697 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,602 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,127 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,872 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations