This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Interpretable machine learning model predicts postoperative complications after thoracoscopic mediastinal tumor surgery: a multicenter study
Citations: 0
Authors: 11
Year: 2026
Abstract
Postoperative complications of mediastinal tumor surgery significantly impact patients’ quality of life and long-term outcomes; however, a notable gap remains in the development of tools to predict their occurrence. This study aimed to develop and validate a machine-learning model predicting complications of thoracoscopic resection. Patients who underwent thoracoscopic mediastinal tumor resection at Southwest Hospital (January 2014 to April 2024) were retrospectively enrolled (n = 302) and randomly divided into training (70%) and validation (30%) sets. An additional 21 patients from Banan Hospital who underwent the same procedure (October 2023 to April 2024) were included as an external test set. The primary endpoint was postoperative complications within 90 days, with severe complications (Clavien-Dindo grade ≥II) as the secondary endpoint. Fifteen predictive models were constructed using three feature selection methods and five machine learning algorithms. Model performance was assessed by AUC, and interpretability was analyzed using SHAP. The optimal model was selected based on the highest AUC in the validation set. Among the 302 patients in the main center, postoperative complications were observed in 92 (43.6%) in the training set and 40 (44.0%) in the validation set. The Lasso-random forest model performed best, incorporating features such as maximum tumor diameter, past medical history, surgical approach, myasthenia gravis, and hypertension (ranked by SHAP-derived feature importance). It achieved an AUC of 0.799 (95% CI: 0.700–0.897), showing robust discrimination and classification ability. To the authors’ knowledge, this is the first web-based machine-learning predictive model for this setting, developed to guide perioperative management and intraoperative decision-making for high-risk patients.
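The abstract describes a two-stage pipeline: Lasso-based feature selection followed by a random forest classifier, evaluated by AUC on a held-out 30% validation split. The sketch below illustrates that workflow with scikit-learn; it is a hedged reconstruction, not the study's code, and the data are synthetic stand-ins (the clinical features, hyperparameters, and random seeds here are assumptions for illustration).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cohort: 302 "patients", binary complication outcome.
X, y = make_classification(n_samples=302, n_features=20, n_informative=5,
                           random_state=0)

# 70/30 split, mirroring the training/validation split in the abstract.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Stage 1: Lasso feature selection -- features with non-zero coefficients survive.
selector = SelectFromModel(Lasso(alpha=0.01, random_state=0))
X_tr_sel = selector.fit_transform(X_tr, y_tr)
X_va_sel = selector.transform(X_va)

# Stage 2: random forest trained on the selected features only.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_tr_sel, y_tr)

# Evaluation: discrimination on the validation set, scored by AUC.
auc = roc_auc_score(y_va, rf.predict_proba(X_va_sel)[:, 1])
print(f"Selected features: {X_tr_sel.shape[1]}, validation AUC: {auc:.3f}")
```

In the study, SHAP values would then be computed on the fitted forest (e.g. with a tree explainer) to rank feature importance, which is how the abstract's top predictors were ordered.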
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,496 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,386 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,848 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,562 citations