This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards Quantification of Explainability Algorithms
Citations: 1
Authors: 2
Year: 2021
Abstract
Despite the proliferation of use cases for Deep Neural Networks and their rising accuracy, the fact that such networks are black-box models hampers our confidence in them. Explainable Artificial Intelligence (XAI) algorithms work on building trust in such models by attempting to interpret the basis of their predictions. However, owing to the lack of work on quantitative approaches for such explanations, these explanations remain subjective in nature. Hence, current XAI models require endorsement from a domain expert to justify their explainability. In this paper, we propose an approach that aims at obtaining quantitative explanations of three CNN architectures (namely InceptionV3, InceptionResNetV2, and NASNetLarge) implemented using the LIME XAI algorithm, based on four general desiderata: Consistency, Efficiency, Integrity, and Preciseness. The obtained experimental results offer enough information to form a quantitative relationship between the explanations of the three CNN models.
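The abstract describes generating LIME explanations for pretrained CNNs such as InceptionV3. As a rough illustration of that pipeline (not the authors' code), the sketch below applies the lime package's image explainer to a Keras InceptionV3; the file name sample.jpg and all parameter values are assumptions for demonstration only.

```python
# Minimal sketch: LIME explanation for a pretrained InceptionV3.
# Illustrative only; "sample.jpg" and parameter values are hypothetical.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input)
from tensorflow.keras.preprocessing import image as keras_image
from lime import lime_image

model = InceptionV3(weights="imagenet")  # classifier to be explained

def predict_fn(images):
    # LIME passes a batch of perturbed images; return class probabilities.
    return model.predict(preprocess_input(np.array(images)))

# Load a sample image at InceptionV3's expected 299x299 input size.
img = keras_image.img_to_array(
    keras_image.load_img("sample.jpg", target_size=(299, 299)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    img.astype("double"), predict_fn, top_labels=3, num_samples=1000)

# Superpixel mask for the top predicted class, marking the image regions
# that contributed most to the prediction.
_, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
```

Desiderata such as Consistency or Efficiency could then be scored over such masks (e.g., comparing masks across repeated runs, or timing explain_instance), which is the kind of quantification the paper proposes.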
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,796 citations
Generative Adversarial Nets
2014 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,334 citations
"Why Should I Trust You?"
2016 · 14.607 Zit.
Generative adversarial networks
2020 · 13,215 citations