This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A New Perspective on Evaluation Methods for Explainable Artificial Intelligence (XAI)
Citations: 17
Authors: 2
Year: 2023
Abstract
One of the big challenges in the field of explainable artificial intelligence (XAI) is how to evaluate explainability approaches. Many evaluation methods (EMs) have been proposed, but a gold standard has yet to be established. Several authors have classified EMs for explainability approaches into categories along aspects of the EMs themselves (e.g., heuristic-based, human-centered, application-grounded, functionally-grounded). In this vision paper, we propose that EMs can also be classified according to the aspects of the XAI process they target. Building on models that spell out the main processes in XAI, we propose that there are explanatory information EMs, understanding EMs, and desiderata EMs. This novel perspective is intended to augment the perspective of other authors by focusing less on the EMs themselves and more on what explainability approaches intend to achieve (i.e., provide good explanatory information, facilitate understanding, satisfy societal desiderata). We hope that the combination of the two perspectives will allow us to evaluate the advantages and disadvantages of explainability approaches more comprehensively, helping us to make a more informed decision about which approaches to use or how to improve them.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations