This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Artificial Intelligence (XAI): Concepts, Applications, Challenges, and Future Perspectives
Citations: 0
Authors: 6
Year: 2026
Abstract
The aim of explainable artificial intelligence (XAI) is to address the black-box problem in high-stakes applications. However, transparency alone does not guarantee trust. This review examines a critical paradox in XAI research: while explanation methods can generate insights, three main challenges limit their effectiveness. First, adversarial manipulations can exploit explanations, creating new attack surfaces with success rates above ninety percent while preserving model accuracy. Second, evaluation practices remain primarily computational: only twenty-six percent of user studies follow human-centered protocols, and fewer than twenty-three percent involve domain experts. Third, regulatory requirements, such as the GDPR right to explanation, lack clear technical implementations, complicating compliance. We analyzed the literature across finance, healthcare, and cybersecurity and found that current research emphasizes algorithmic innovation over practical deployment. Moving toward reliable AI requires shifting from simple explanation methods (XAI 1.0) to systems that are aligned with human understanding, resistant to adversarial attacks, and compliant with legal requirements (XAI 2.0). This review provides guidance on the key technical advances, evaluation strategies, and regulatory clarifications necessary for deploying trustworthy AI.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,464 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14,315 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations