This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Towards an Explainable Mortality Prediction Model
Citations: 2
Authors: 4
Year: 2020
Abstract
Influence functions are analytical tools from robust statistics that can help interpret the decisions of black-box machine learning models. They can be used to attribute changes in the loss function to small perturbations in the input features. Current work on influence functions is limited to the features available before the last layer of deep neural networks (DNNs). We extend the influence function approximation to full DNNs by computing gradients in an end-to-end manner, and we relate changes in the loss function to individual input features using an efficient algorithm. We propose an accurate mortality prediction neural network and demonstrate the effectiveness of the extended influence functions on the eICU dataset. The features chosen by the proposed extended influence functions were more similar to those selected by human experts than features chosen by traditional methods.
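The classic influence-function estimate that the abstract builds on scores each training point by how much up-weighting it would change the test loss. The sketch below is a minimal illustration on a convex logistic-regression model (an assumption for tractability; it is not the paper's DNN or its end-to-end extension), where the Hessian is exact and safely invertible:

```python
import numpy as np

# Minimal sketch of the classic influence-function estimate
#   influence(z_train, z_test) = -grad L(z_test)^T  H^{-1}  grad L(z_train)
# on a logistic-regression model. The paper extends this idea end-to-end
# through a DNN; the model, data, and names here are illustrative only.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, x, y):
    # Gradient of the logistic loss for a single example (x, y), y in {0, 1}.
    return (sigmoid(x @ w) - y) * x

def hessian(w, X, lam=1e-3):
    # Average Hessian of the logistic loss over the training set,
    # plus a small ridge term so it is invertible.
    p = sigmoid(X @ w)
    H = (X * (p * (1 - p))[:, None]).T @ X / len(X)
    return H + lam * np.eye(X.shape[1])

def influence_scores(w, X_train, y_train, x_test, y_test):
    # Influence of up-weighting each training point on the test loss.
    H_inv = np.linalg.inv(hessian(w, X_train))
    g_test = grad_loss(w, x_test, y_test)
    return np.array([-g_test @ H_inv @ grad_loss(w, x, y)
                     for x, y in zip(X_train, y_train)])

# Tiny synthetic demo: fit by plain gradient descent, then score points.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(3)
for _ in range(200):
    w -= 0.5 * np.mean([grad_loss(w, x, t) for x, t in zip(X, y)], axis=0)

scores = influence_scores(w, X, y, X[0], y[0])
print(scores.shape)  # one influence score per training point
```

The paper's contribution is to push the gradient computation in this formula through all layers of the network down to the raw input features, rather than stopping at the penultimate layer.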
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations