This is an overview page with metadata for this scientific article. The full article is available from the publisher.
CausalAIME: Leveraging Peter-Clark Algorithms and Inverse Modeling for Unified Global Feature Explanation in Healthcare
Citations: 1
Authors: 1
Year: 2025
Abstract
In medical applications of machine learning, it is essential to interpret model behavior and explore the true association between features and clinical outcomes. Conventional Explainable AI approaches typically focus on "model-dependent" global feature importance derived from trained models, which may not guarantee "data-driven" causal relationships or medical consistency. To address this, we propose CausalAIME, a new framework that combines approximate inverse model explanations (AIME) with the Peter-Clark algorithm for causal discovery. This integration suppresses multicollinearity while enabling global visualization of both feature signs (positive or negative) and class-specific contributions. Furthermore, by choosing either the model's output $\hat{Y}$ or the true label $Y$ as input, CausalAIME unifies both model-dependent and data-driven global feature importance within the same framework. Our experiments on breast cancer diagnostic data compared CausalAIME with existing methods such as Random Forest and SHAP, highlighting the advantages of CausalAIME in offering sign-based interpretability and causal perspectives, both critical in clinical settings. We anticipate that CausalAIME will contribute to enhanced explainability across various domains, including healthcare, by meeting the need for both true association analysis and model behavior interpretation.
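The full method is only available in the publisher's version, but the unification idea described in the abstract — computing signed, class-specific global feature contributions by fitting an inverse mapping from either the model's predictions $\hat{Y}$ (model-dependent) or the true labels $Y$ (data-driven) back to the features — can be illustrated with a minimal least-squares sketch. This is an assumption-laden illustration, not the authors' actual AIME or CausalAIME implementation; the dataset choice, function name, and the simple one-hot least-squares inverse are all ours:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

# Breast cancer diagnostic data, as used in the paper's experiments.
X, y = load_breast_cancer(return_X_y=True)
Xs = StandardScaler().fit_transform(X)

model = RandomForestClassifier(random_state=0).fit(Xs, y)
y_hat = model.predict(Xs)  # model output Ŷ

def inverse_importance(labels, Xs, n_classes=2):
    """Toy 'inverse model': least-squares map from one-hot labels back to
    features. Rows give signed, per-class global feature contributions.
    (Illustrative stand-in for AIME, not the published algorithm.)"""
    T = np.eye(n_classes)[labels]               # (n_samples, n_classes) one-hot
    A, *_ = np.linalg.lstsq(T, Xs, rcond=None)  # (n_classes, n_features)
    return A

# Same procedure, two choices of input — the unification the abstract describes:
A_model = inverse_importance(y_hat, Xs)  # model-dependent (uses Ŷ)
A_data = inverse_importance(y, Xs)       # data-driven (uses Y)
```

Each row of `A_model` (or `A_data`) is a signed per-class contribution profile over the 30 features; comparing the two matrices contrasts what the model relies on with what the data itself supports. The paper's actual pipeline additionally applies the Peter-Clark algorithm to restrict attention to causally supported edges, which this sketch omits.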
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,576 cit.
Generative Adversarial Nets
2023 · 19,892 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,300 cit.
"Why Should I Trust You?"
2016 · 14,396 cit.
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 cit.