OpenAlex · Updated hourly · Last updated: 02.05.2026, 17:53

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Towards Quantification of Explainability Algorithms

2021 · 1 citation

Citations: 1 · Authors: 2 · Year: 2021

Abstract

Despite the proliferation of use cases for Deep Neural Networks and their escalating accuracy, the fact that such networks are black-box models undermines our trust in them. Explainable Artificial Intelligence (XAI) algorithms aim to build trust in such models by attempting to interpret the basis of their predictions. However, because little work has focused on developing quantitative approaches to such explanations, they remain subjective in nature. Hence, current XAI models require endorsement from a domain expert to justify their explainability. In this paper, we propose an approach for obtaining quantitative explanations of three CNN architectures (namely InceptionV3, InceptionResNetV2, and NASNetLarge) using the LIME XAI algorithm, by proposing four general desiderata: Consistency, Efficiency, Integrity, and Preciseness. The experimental results offer enough information to form a quantitative relationship between the explanations of the three CNN models.
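The abstract names four desiderata but does not reproduce their formal definitions on this page. As an illustration only, the sketch below shows one hypothetical way two of them (Consistency and Preciseness) could be scored for superpixel-based LIME explanations using plain Python sets; the function names and formulas are assumptions for illustration, not the authors' definitions.

```python
# Hypothetical scoring sketch for two of the four desiderata
# (Consistency, Preciseness) over superpixel-based LIME explanations.
# The formulas below are illustrative assumptions, not the paper's metrics.

def consistency(run_a: set, run_b: set) -> float:
    """Jaccard overlap between the superpixels LIME selects on two
    repeated runs over the same image; 1.0 means identical explanations."""
    if not run_a and not run_b:
        return 1.0
    return len(run_a & run_b) / len(run_a | run_b)

def preciseness(explanation: set, object_region: set) -> float:
    """Fraction of the selected superpixels that fall inside an (assumed
    known) ground-truth object region."""
    if not explanation:
        return 0.0
    return len(explanation & object_region) / len(explanation)

# Toy usage: superpixels are identified by integer ids.
run1, run2 = {1, 2, 3, 5}, {1, 2, 3, 7}
print(consistency(run1, run2))          # 3 shared of 5 total -> 0.6
print(preciseness(run1, {1, 2, 3, 4}))  # 3 of 4 selected inside -> 0.75
```

Because LIME perturbs the input stochastically, repeated runs can select different superpixels, which is exactly what a consistency-style score would capture.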

Topics

Explainable Artificial Intelligence (XAI) · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education