OpenAlex · Updated hourly · Last updated: 18.03.2026, 06:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating How Explainable AI-Driven Financial and Clinical Risk Models Improve Global Maternal, Neonatal and Mental Health Outcomes in Resource-Limited Settings

2026 · 0 citations · Asian Journal of Advanced Research and Reports · Open Access

0 Citations · 10 Authors · 2026

Abstract

Maternal, neonatal, and perinatal mental health outcomes have remained unacceptably poor in resource-limited countries, partly because of inefficient risk-factor frameworks and a lack of trust in artificial intelligence-based decision-support systems. This study assesses the potential of AI-driven financial and clinical risk prediction models, incorporating explainable artificial intelligence, to improve outcome predictions while enabling transparency and usability in health-system decision-making. The study used a cross-sectional, questionnaire-based design, obtaining clinical, psychosocial, and financial information from women of reproductive age presenting to maternal and neonatal services. Latent variables were derived using exploratory factor analysis, and internal consistency was tested via Cronbach's alpha. Standardized factor scores for clinical risk, financial susceptibility, psychosocial distress, health-system access, and trust in explainability served as inputs to explainable machine-learning algorithms, comprising gradient-boosting models and logistic regression with post-hoc explanations. The resulting predictive models identify maternal complications, neonatal risks, and perinatal mental health concerns with high accuracy (AUC 0.79 to 0.87), a clear improvement over earlier single-domain models, which typically scored below 0.75. The system is particularly suited to resource-constrained settings because it integrates clinical, financial, and mental health risk factors into a single explainable AI tool. Critically, the model pinpoints high-risk groups facing both medical and financial challenges, so healthcare teams can target interventions where they will have the greatest effect.
In short, this approach delivers better predictions, more transparency, and stronger policy impact than past methods that treated these issues separately. These results show that AI explanation systems, grounded in validated measurement frameworks, can improve prediction without sacrificing interpretability, and represent a viable and ethically sound path towards improving maternal, child, and mental health in resource-limited settings.
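The abstract reports internal consistency testing via Cronbach's alpha. As an illustration only (the paper's data and item structure are not shown here), this is a minimal pure-Python sketch of the statistic, assuming questionnaire scores are arranged as a respondents-by-items matrix:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items[0])                                   # number of items
    item_vars = [variance(col) for col in zip(*items)]  # per-item variance
    total_var = variance([sum(row) for row in items])   # variance of sum scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: perfectly consistent items yield alpha = 1.0
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(round(cronbach_alpha(scores), 3))  # → 1.0
```

Values near 1 indicate that the items measure a single underlying construct; scales with alpha below roughly 0.7 are conventionally treated as unreliable.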
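The models are evaluated by AUC (0.79 to 0.87). For readers unfamiliar with the metric, this is a small self-contained sketch, not the paper's evaluation code, computing AUC via its Mann-Whitney interpretation: the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case (ties count half):

```python
def auc(labels, scores):
    """Area under the ROC curve from binary labels and risk scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]  # scores of positives
    neg = [s for y, s in zip(labels, scores) if y == 0]  # scores of negatives
    # Count positive-negative pairs ranked correctly; ties contribute 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and model scores
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, so the reported 0.79 to 0.87 indicates that the combined clinical-financial-psychosocial models rank high-risk cases above low-risk ones roughly 80 to 87 percent of the time.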
