This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evaluating How Explainable AI-Driven Financial and Clinical Risk Models Improve Global Maternal, Neonatal and Mental Health Outcomes in Resource-Limited Settings
Citations: 0
Authors: 10
Year: 2026
Abstract
Maternal, neonatal, and perinatal mental health outcomes have remained unacceptably poor in resource-limited countries, partly because of inefficient risk-factor frameworks and a lack of trust in artificial intelligence-based decision-support systems. The current study assesses the potential of artificial intelligence-driven financial and clinical risk prediction models, incorporating explainable artificial intelligence, to improve outcome predictions while enabling transparency and usability in health-system decision-making. The study used a cross-sectional, questionnaire-based design, obtaining clinical, psychosocial, and financial information from women of reproductive age presenting to maternal and neonatal services. Latent variables were derived using exploratory factor analysis, and internal consistency was tested with Cronbach’s alpha. Standardized factor scores for clinical risk, financial susceptibility, psychosocial distress, health-system access, and trust in explainability were used as inputs to explainable machine-learning algorithms, comprising gradient boosting models and logistic regression models with post-hoc explanations. The resulting predictive models identify maternal complications, neonatal risks, and perinatal mental health concerns with high accuracy (AUC scores ranging from 0.79 to 0.87), a clear improvement over earlier single-domain models, which typically scored below 0.75. The system is well suited to real-world, resource-constrained settings because it combines clinical, financial, and mental health risk factors into one explainable AI tool. In particular, the model pinpoints high-risk groups who face both medical and financial challenges, so healthcare teams can focus their efforts where they will make the biggest difference. In short, this approach delivers better predictions, more transparency, and stronger policy impact than past methods that treated these issues separately. These results show that AI explanation systems, grounded in validated measurement frameworks, can improve prediction without sacrificing interpretability and represent a viable and ethically sound path towards improving maternal, child, and mental health in settings with limited resources.
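The abstract outlines a concrete modelling pipeline: questionnaire items are checked for internal consistency with Cronbach’s alpha, standardized factor scores feed gradient boosting and logistic regression models, performance is reported as AUC, and predictions are explained post hoc. Below is a minimal Python sketch of that pipeline using synthetic data; the factor names, effect weights, and the use of scikit-learn’s permutation importance as the post-hoc explanation are all illustrative assumptions, not the authors’ actual implementation.

```python
# Minimal sketch of the pipeline described in the abstract. Synthetic data;
# factor names, effect directions, and the explanation method are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def cronbach_alpha(items):
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item var) / total var)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
n = 1000
# The five standardized factor-score domains named in the abstract.
factors = ["clinical_risk", "financial_susceptibility",
           "psychosocial_distress", "health_system_access",
           "trust_in_explainability"]
X = rng.standard_normal((n, len(factors)))           # synthetic factor scores
logits = X @ np.array([1.2, 0.9, 0.7, -0.5, -0.3])   # assumed effect directions
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))   # synthetic adverse outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (GradientBoostingClassifier(random_state=0), LogisticRegression()):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    # Post-hoc explanation via permutation importance, a generic stand-in
    # for whatever attribution method the paper actually used.
    imp = permutation_importance(model, X_te, y_te,
                                 scoring="roc_auc", random_state=0)
    ranked = sorted(zip(factors, imp.importances_mean), key=lambda t: -t[1])
    print(type(model).__name__, f"AUC = {auc:.2f}", ranked[:2])
```

On this synthetic data both models recover the assumed ranking of risk domains; with real survey data the factor scores would come from the exploratory factor analysis step rather than random draws.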
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,326 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,218 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,111 citations
Authors
Institutions
- Mississippi State University (US)
- Buffalo State University (US)
- University at Buffalo, State University of New York (US)
- Western Illinois University (US)
- Concordia University Wisconsin (US)
- Artificial Intelligence in Medicine (Canada) (CA)
- University of Ibadan (NG)
- University of Oklahoma (US)
- Center for Clinical Care and Research in Nigeria (NG)
- Adler Graduate School (US)
- Abia State University (NG)
- Delta State Polytechnic Ogwashi-Uku (NG)