This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Human-Centered Explainable Multimodal AI for Personalized Healthcare Diagnosis in Aging Populations
Citations: 0
Authors: 6
Year: 2025
Abstract
With the growing availability of multimodal health data, including patient behavior signals, clinical text, and medical images, AI systems have increasing potential to support early disease detection and decision-making in aging populations. As healthcare systems evolve into complex socio-technical environments, integrating explainable AI with human-centered design is essential for improving transparency, trust, and adoption. In this study, we propose a multimodal human-centered deep learning framework (MDLHC) to support breast cancer diagnosis by integrating patient data, imaging, and explainable AI (XAI) methods. The framework is designed around the social and cognitive needs of elderly patients and clinicians, and uses demographic data and mammography images for personalized diagnostics. To bridge structured and unstructured data, we incorporate large language models (LLMs) for natural language alignment and summarization of clinical insights. For improved image understanding, a CNN-based residual integrated attention (RIAC) module is applied for noise reduction, followed by optimized feature selection using evolutionary PSO with Laplacian centrality (EPSO-LC). Classification is performed via a deep backpropagation CNN (DL-BP-CNN), enhanced with two explainability modules: ensemble random SHAP (ERS) and submodular selection-based LIME (SMS-LIME), providing both localized and global transparency. This framework contributes to the development of a transparent, accurate, and personalized AI system that aligns with the scope of computational social systems in healthcare, particularly for aging populations that require reliable diagnostic support.
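The feature-selection stage described above builds on particle swarm optimization. The paper's EPSO-LC variant (with Laplacian centrality) is not detailed on this page, so the following is only a minimal sketch of plain binary PSO for feature-mask search; the function names, the sigmoid transfer step, and the toy `fitness` callable are illustrative assumptions, not the authors' method:

```python
import math
import random

def binary_pso_feature_select(n_features, fitness, n_particles=10,
                              n_iters=30, seed=0):
    """Plain binary PSO: each particle is a 0/1 mask over features.

    `fitness(mask)` is a user-supplied score to maximize (hypothetical
    helper, e.g. validation accuracy minus a sparsity penalty).
    """
    rng = random.Random(seed)
    # Random initial bit masks and zero velocities.
    pos = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_score = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_score[i])
    gbest, gbest_score = pbest[g][:], pbest_score[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia / pull weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid transfer: velocity -> probability of bit = 1.
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if rng.random() < prob else 0
            score = fitness(pos[i])
            if score > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i][:], score
                if score > gbest_score:
                    gbest, gbest_score = pos[i][:], score
    return gbest, gbest_score
```

A usage example with a toy fitness that rewards three "informative" feature indices while penalizing mask size would be `binary_pso_feature_select(8, lambda m: sum(m[i] for i in (0, 2, 4)) - 0.2 * sum(m))`; the returned mask is the best 0/1 selection found under that score.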
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,567 citations
Generative Adversarial Nets
2014 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,299 citations
"Why Should I Trust You?"
2016 · 14,391 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations