This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI for Maternal Health Risk Prediction in Bangladesh: A Hybrid Fuzzy-XGBoost Framework with Clinician Validation
Citations: 0
Authors: 3
Year: 2026
Abstract
Bangladesh faces a maternal mortality ratio of 156 per 100,000 live births, with 2,459 maternal deaths reported in 2022. While machine learning shows promise in risk prediction, black-box models limit clinical adoption in resource-constrained settings where explainability is crucial. This study develops a hybrid fuzzy-XGBoost framework combining ante-hoc fuzzy logic interpretability with post-hoc SHAP explanations, validated through clinician feedback. We trained the model on 1,014 maternal health records with clinical parameters (age, blood pressure, blood sugar) augmented with synthetic regional features based on Bangladesh health data. The hybrid model achieved 88.67% accuracy with ROC-AUC of 0.9703, outperforming the best baseline (Gradient Boosting: 86.21%) by 2.46 percentage points. SHAP analysis identified healthcare access score (most important), blood sugar, and fuzzy risk score as primary predictors. Clinician validation (N=14) showed strong preference for hybrid explanations (71.4% across cases) with 54.8% expressing trust in clinical practice. Fairness analysis revealed equitable performance across regions (σ=0.0766), with better accuracy in underserved areas (r=-0.876 correlation with healthcare access), highlighting potential to address disparities.
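The ante-hoc fuzzy component described in the abstract can be illustrated with a minimal sketch: triangular membership functions over clinical parameters produce a fuzzy risk score in [0, 1], which is then appended as an extra input feature for the gradient-boosted classifier. The membership breakpoints and the max (OR) combination rule below are illustrative assumptions, not the study's actual clinical rule base.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 when x == b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_risk_score(systolic_bp, blood_sugar):
    """Combine 'high blood pressure' and 'high blood sugar' memberships into a
    single [0, 1] risk score via the max t-conorm (fuzzy OR).
    Breakpoints are illustrative placeholders, not the paper's rule base."""
    high_bp = tri(systolic_bp, 120.0, 160.0, 200.0)  # systolic BP in mmHg
    high_bs = tri(blood_sugar, 7.0, 12.0, 17.0)      # blood sugar in mmol/L
    return float(np.maximum(high_bp, high_bs))

# The score would be stacked onto the feature matrix before training, e.g.:
# X_aug = np.column_stack([X, [fuzzy_risk_score(bp, bs) for bp, bs in zip(X[:, 1], X[:, 2])]])
```

Because the fuzzy score enters the model as an ordinary feature, SHAP can later attribute predictions to it directly, which matches the abstract's finding that the fuzzy risk score ranked among the primary predictors.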
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations