OpenAlex · Updated hourly · Last updated: 15.05.2026, 02:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

2016 · 29 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 29

Authors: 5

Year: 2016

Abstract

Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application. These models, however, lack transparency due to their complex nonlinear structure and to the complex data distributions to which they typically apply. As a result, it is difficult to fully characterize what makes these models reach a particular decision for a given input. This lack of transparency can be a drawback, especially in the context of sensitive applications such as medical analysis or security. In this short paper, we summarize a recent technique introduced by Bach et al. [1] that explains predictions by decomposing the classification decision of DNN models in terms of input variables.
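As a rough illustration of the decomposition idea the abstract describes, the sketch below applies the LRP-epsilon redistribution rule to a tiny two-layer ReLU network in NumPy. The network, its random weights, and the omission of biases are assumptions made for brevity; this is not the paper's full procedure, only a minimal example of propagating an output score back onto the input variables.

```python
import numpy as np

def lrp_dense(W, a, R, eps=1e-6):
    """Redistribute relevance R from a dense layer's outputs to its inputs
    using the LRP-epsilon rule (biases omitted for simplicity)."""
    z = a @ W                                        # pre-activations z_j = sum_i a_i * w_ij
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio R_j / z_j
    c = s @ W.T                                      # back-distribute through the weights
    return a * c                                     # R_i = a_i * sum_j w_ij * s_j

# Tiny two-layer ReLU network with random, purely illustrative weights
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 1))

x = rng.normal(size=(4,))
h = np.maximum(0.0, x @ W1)   # hidden ReLU layer
y = h @ W2                    # output score to be explained

# Backward relevance pass: output score -> hidden layer -> input variables
R_h = lrp_dense(W2, h, y)
R_x = lrp_dense(W1, x, R_h)

# Conservation: input relevances sum (approximately) to the output score
print(y.item(), R_x.sum())
```

Each input variable ends up with a relevance score `R_x[i]` whose sign and magnitude indicate how it contributed to the decision, and the scores approximately conserve the output value layer by layer, which is the key property of the decomposition.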

Related works

Authors

Topics

Anomaly Detection Techniques and Applications · Neural Networks and Applications · Machine Learning in Healthcare