This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Estimating Uncertainty in Deep Learning for Reporting Confidence to Clinicians when Segmenting Nuclei Image Data
23 citations · 4 authors · 2019
Abstract
Deep learning, which involves powerful black-box predictors, has achieved state-of-the-art performance in medical image analysis tasks such as segmentation and classification for diagnosis. Despite these successes, however, these methods focus exclusively on improving the accuracy of point predictions without assessing the quality of their outputs. Knowing how much confidence there is in a prediction is essential for gaining clinicians' trust in the technology. Monte-Carlo dropout in neural networks is equivalent to a specific variational approximation in Bayesian neural networks and is simple to implement without any changes to the network architecture; it is considered state-of-the-art for estimating uncertainty. In classification, however, it does not model the predictive probabilities, which means the true underlying uncertainty in the prediction is not captured. In this paper, we propose an uncertainty estimation framework for classification that decomposes the predictive probabilities into the two main types of uncertainty in Bayesian modelling: aleatoric and epistemic uncertainty (representing uncertainty in the quality of the data and in the model parameters, respectively). We demonstrate that the proposed uncertainty quantification framework, using the Bayesian Residual U-Net (BRUNet), provides additional insight for clinicians when analysing images with the help of deep learners. In addition, we demonstrate how the resulting uncertainty depends on the dropout rate, using nuclei images from divergent medical images.
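The abstract describes Monte-Carlo dropout (dropout kept active at test time over T stochastic forward passes) and a decomposition of the predictive uncertainty into aleatoric and epistemic parts. A minimal NumPy sketch of that idea is shown below; the `toy_forward` network, its weights, and the dropout rate are illustrative assumptions, not the paper's BRUNet, and the decomposition shown is the common diagonal form (aleatoric ≈ mean of p(1−p), epistemic ≈ variance of p across passes):

```python
import numpy as np

def mc_dropout_uncertainty(forward, x, T=100, rng=None):
    """Run T stochastic forward passes with dropout active and
    decompose predictive uncertainty per class.

    forward(x, rng) must return class probabilities (shape: (C,))
    sampled with a fresh dropout mask each call.
    """
    rng = rng or np.random.default_rng(0)
    probs = np.stack([forward(x, rng) for _ in range(T)])  # (T, C)
    p_mean = probs.mean(axis=0)
    # Aleatoric part: average per-pass predictive variance p(1 - p)
    aleatoric = (probs * (1.0 - probs)).mean(axis=0)
    # Epistemic part: spread of the probabilities across passes
    epistemic = ((probs - p_mean) ** 2).mean(axis=0)
    return p_mean, aleatoric, epistemic

def toy_forward(x, rng, p_drop=0.5):
    """Toy two-input logistic 'network' with Bernoulli dropout.
    Purely hypothetical weights for illustration."""
    w = np.array([1.5, -0.8])
    mask = rng.random(2) >= p_drop           # dropout mask
    h = (x * w * mask) / (1.0 - p_drop)      # inverted-dropout scaling
    p1 = 1.0 / (1.0 + np.exp(-h.sum()))
    return np.array([1.0 - p1, p1])          # two-class probabilities
```

Raising `p_drop` widens the spread of the sampled probabilities, which is how the dependence of the estimated uncertainty on the dropout rate (studied in the paper) manifests in this sketch.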
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,811 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,337 citations
"Why Should I Trust You?"
2016 · 14,618 citations
Generative adversarial networks
2020 · 13,229 citations