OpenAlex · Updated hourly · Last updated: 01.05.2026, 07:57

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Responsible AI for Sepsis Prediction: Bridging the Gap Between Machine Learning Performance and Clinical Trust

2026 · 0 citations · Journal of Clinical Medicine · Open Access

Citations: 0 · Authors: 6 · Year: 2026

Abstract

<b>Background:</b> Sepsis remains a leading cause of mortality in intensive care units (ICUs) worldwide. Machine learning models for clinical prediction must be accurate, fair, transparent, and reliable so that physicians can rely on them confidently in their decision-making. <b>Methods:</b> We used the MIMIC-IV (version 3.1) database to evaluate several machine learning architectures, including Logistic Regression, XGBoost, LightGBM, LSTM (Long Short-Term Memory) networks, and Transformer models. We predicted three main clinical targets: hospital mortality, length of stay, and septic shock onset, in accordance with responsible AI principles. Model interpretability was assessed using Shapley Additive Explanations (SHAP). <b>Results:</b> The XGBoost model demonstrated superior performance across prediction tasks, particularly for hospital mortality (AUROC 0.874), outperforming LSTM networks, Transformers, and linear baselines. Variable-importance analysis confirmed the clinical relevance of the model's predictors. <b>Conclusions:</b> While XGBoost and ensemble algorithms demonstrate superior predictive power for sepsis prognosis, their clinical adoption requires robust explainability mechanisms to earn clinicians' trust.
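The headline metric in the abstract, AUROC (area under the receiver operating characteristic curve, reported as 0.874 for hospital mortality), has a simple probabilistic interpretation: it is the probability that a randomly chosen positive case receives a higher predicted risk score than a randomly chosen negative case. This is a minimal pure-Python sketch of that pairwise definition, not the authors' evaluation pipeline; the function name and toy data are illustrative.

```python
def auroc(labels, scores):
    """AUROC via the pairwise (Mann-Whitney) definition.

    labels: iterable of 0/1 ground-truth outcomes (e.g., hospital mortality)
    scores: iterable of model risk scores, higher = more likely positive
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count positive/negative pairs where the positive scores higher;
    # ties contribute half a "win".
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two negatives, two positives.
print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUROC of 0.5 corresponds to random ranking and 1.0 to perfect separation, so the reported 0.874 means the model ranks a deceased patient above a surviving one in roughly 87% of such pairs.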

Related works