This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
XAI-MRI: A Multilayered Framework for Explainable AI driven Medical Risk Identification
Citations: 0
Authors: 3
Year: 2025
Abstract
The adoption of Artificial Intelligence (AI) in healthcare has introduced new capabilities for predictive modeling, enabling earlier detection of adverse health events and more precise patient risk assessment. Predictive analytics, when combined with real-world clinical data, holds promise for enhancing diagnostic accuracy, optimizing resource allocation, improving patient treatment, and facilitating faster recovery. However, as AI models become increasingly complex, their decision-making processes often lack transparency, which can hinder clinician trust and limit their integration into medical decision-making. Explainable AI (XAI) addresses these concerns by making model behavior interpretable and actionable for end users, particularly clinicians who require clear justification for data-driven decisions. To bridge this gap, we propose XAI-MRI (Explainable AI for Medical Risk Identification), a modular, layered framework adapted from the previously established XAI-ARM architecture. XAI-MRI combines machine learning-based risk prediction with explainability techniques such as SHAP to provide clinicians with clear, human-understandable insights into model outputs. The framework incorporates multiple layers, including data preprocessing, predictive modeling, explainability, and ethical compliance, to ensure both technical robustness and alignment with healthcare regulatory standards. We demonstrate the utility of XAI-MRI through a clinical application involving the early prediction of Venous Thromboembolism (VTE) in hospitalized patients, a preventable yet life-threatening condition influenced by immobility, surgical procedures, and individual risk factors. The framework offers interpretable insights into risk drivers, aiding preventive decision-making, reinforcing clinical confidence, and improving transparency in high-risk settings. By providing a structured framework for explainable risk prediction, this work supports the responsible integration of AI into real-world clinical practice.
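The abstract pairs a machine learning-based risk model with SHAP as the explainability layer. As a rough illustration of that pairing only (not the paper's actual pipeline; the feature names, model choice, and synthetic data below are assumptions), the following Python sketch trains a classifier and attributes individual risk predictions to their input features with SHAP:

```python
# Minimal sketch of a SHAP explainability layer over a risk classifier.
# NOT the authors' implementation: feature names, model, and data are
# illustrative stand-ins for a VTE-style risk-prediction task.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["immobility_days", "recent_surgery", "age", "bmi", "prior_vte"]

# Synthetic stand-in for hospitalized-patient records (no real clinical data).
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 4]
     + rng.normal(size=500) > 1.0).astype(int)

# Predictive-modeling layer: any risk classifier could sit here.
model = GradientBoostingClassifier().fit(X, y)

# Explainability layer: SHAP values attribute each patient's risk score
# to individual features, giving a per-prediction justification.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(features, shap_values[0]):
    print(f"{name}: {value:+.3f}")
```

Each printed value is that feature's additive contribution (in the model's margin space) to the first patient's predicted risk, which is the kind of per-patient risk-driver breakdown the framework aims to surface to clinicians.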
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations