OpenAlex · Updated hourly · Last updated: 06.05.2026, 11:44

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Artificial Intelligence in Critical Decision-Making Systems

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The deployment of artificial intelligence (AI) in critical decision-making domains—including healthcare diagnostics, financial risk assessment, and autonomous vehicle navigation—has intensified the demand for transparency and interpretability in algorithmic reasoning. Explainable Artificial Intelligence (XAI) has emerged as a pivotal research paradigm aimed at rendering complex machine learning models comprehensible to human stakeholders without substantially compromising predictive performance. This paper presents a comprehensive survey of XAI methodologies, categorizing them into model-agnostic approaches (LIME, SHAP, Anchors), gradient-based techniques (Grad-CAM, Integrated Gradients), and inherently interpretable architectures (decision trees, CORELS, Explainable Boosting Machines). We systematically evaluate these methods across three critical application domains, comparing their explanation fidelity, computational overhead, and alignment with regulatory requirements such as the EU AI Act and GDPR's right to explanation. Our analysis reveals that SHAP achieves the highest average fidelity score (0.88) across domains, while inherently interpretable models offer superior transparency at the cost of reduced capacity for modeling complex non-linear relationships. We further identify key research gaps, including the absence of standardized evaluation benchmarks and the challenge of balancing faithfulness with human comprehensibility. The findings inform practical guidelines for selecting XAI techniques appropriate to specific deployment contexts and regulatory constraints.
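The model-agnostic approaches surveyed above (LIME, SHAP, Anchors) share a core idea: probe the black-box model around a single instance and fit a simple surrogate whose coefficients serve as local feature attributions. The following is a minimal sketch of that idea in the style of LIME, using a hypothetical black-box function, perturbation scale, and kernel width chosen purely for illustration (none of these come from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in "black box": a non-linear function of two features.
    # (Hypothetical; a real use case would wrap a trained model's predict().)
    return X[:, 0] ** 2 + 3 * X[:, 1]

x0 = np.array([1.0, 2.0])  # the instance whose prediction we explain

# 1. Sample perturbations in a small neighborhood of x0.
Z = x0 + rng.normal(scale=0.1, size=(500, 2))
y = black_box(Z)

# 2. Weight samples by proximity to x0 (Gaussian kernel, width 0.1).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.1 ** 2))

# 3. Fit a weighted least-squares linear surrogate: y ≈ b0 + b1*z1 + b2*z2.
A = np.c_[np.ones(len(Z)), Z]          # intercept column + features
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)

# coef[1] and coef[2] are the local attributions: near x0 they approximate
# the local gradient of the black box, roughly (2, 3) for this function.
print(coef[1], coef[2])
```

The surrogate's coefficients are the explanation: they describe how each feature drives the prediction in the immediate neighborhood of `x0`, even though the underlying model is non-linear. Production libraries such as `lime` and `shap` add interpretable feature representations and principled weighting on top of this basic recipe.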

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare