This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable Artificial Intelligence in Critical Decision-Making Systems
Citations: 0
Authors: 1
Year: 2026
Abstract
The deployment of artificial intelligence (AI) in critical decision-making domains—including healthcare diagnostics, financial risk assessment, and autonomous vehicle navigation—has intensified the demand for transparency and interpretability in algorithmic reasoning. Explainable Artificial Intelligence (XAI) has emerged as a pivotal research paradigm aimed at rendering complex machine learning models comprehensible to human stakeholders without substantially compromising predictive performance. This paper presents a comprehensive survey of XAI methodologies, categorizing them into model-agnostic approaches (LIME, SHAP, Anchors), gradient-based techniques (Grad-CAM, Integrated Gradients), and inherently interpretable architectures (decision trees, CORELS, Explainable Boosting Machines). We systematically evaluate these methods across three critical application domains, comparing their explanation fidelity, computational overhead, and alignment with regulatory requirements such as the EU AI Act and GDPR's right to explanation. Our analysis reveals that SHAP achieves the highest average fidelity score (0.88) across domains, while inherently interpretable models offer superior transparency at the cost of reduced capacity for modeling complex non-linear relationships. We further identify key research gaps, including the absence of standardized evaluation benchmarks and the challenge of balancing faithfulness with human comprehensibility. The findings inform practical guidelines for selecting XAI techniques appropriate to specific deployment contexts and regulatory constraints.
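As a purely illustrative aside (not drawn from the paper), the following is a minimal sketch of the kind of model-agnostic SHAP attribution workflow the abstract's fidelity comparison refers to. It assumes the open-source shap library and a scikit-learn classifier; the dataset, model, and all variable names are hypothetical choices for the example.

```python
# Illustrative sketch only: model-agnostic SHAP attributions for a
# tabular classifier. Assumes the open-source `shap` and `scikit-learn`
# packages; dataset and model are hypothetical, not from the paper.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Fit a black-box model on a standard tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# shap.Explainer dispatches to a model-appropriate algorithm
# (TreeExplainer for tree ensembles), using X as background data.
explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

# Per-feature Shapley attributions for the first sample's positive class.
print(shap_values[0, :, 1].values)
```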
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,869 citations
Generative Adversarial Nets
2014 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,346 citations
"Why Should I Trust You?"
2016 · 14.643 Zit.
Generative adversarial networks
2020 · 13,279 citations