This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Rethinking Explainable AI: Explanations can be Deceiving
0 citations · 4 authors · 2025
Abstract
The propensity to overtrust explanations and over-rely on systems that seem transparent makes humans vulnerable to output that conforms to explainable AI (XAI) best practice. Human-centred XAI research seeks to determine the type of explanation most appropriate in any particular context. Other disciplines, meanwhile, provide insights into the way deception has tended to arise in relation to AI systems. Examining XAI research in this context, we find it a perfect melting pot for the generation of deceptive explanations. We demonstrate the problem in a user study and provide and evaluate recommendations for stakeholders.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations