This is an overview page with metadata for this scientific work. The full article is available from the publisher.
How to produce complementary explanations using an Ensemble Model
Citations: 18 · Authors: 3 · Year: 2019
Abstract
In order to increase the adoption of machine learning models in areas like medicine and finance, it is necessary to have correct and diverse explanations for the decisions that the models provide, to satisfy the curiosity of decision-makers and the needs of regulators. In this paper, we introduce a method, based on a previously presented framework, to explain the decisions of an ensemble model. Moreover, we instantiate the proposed approach with an ensemble composed of a Scorecard, a Random Forest, and a Deep Neural Network, to produce accurate decisions along with correct and diverse explanations. Our methods are tested on two biomedical datasets and one financial dataset. The proposed ensemble leads to an improvement in the quality of the decisions, and in the correctness of the explanations, compared to its constituents alone. Qualitatively, it produces diverse explanations that make sense and convince the experts.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations