This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Interpretable Image Recognition Models for Big Data With Prototypes and Uncertainty
Citations: 1 · Authors: 1 · Year: 2023
Abstract
Deep neural networks have been applied to big data scenarios and have achieved good results. However, high accuracy alone does not earn trust in high-risk decision scenarios: in medicine, autonomous driving, and other fields, every wrong decision has incalculable consequences. For an image recognition model built on big data to gain trust, it must simultaneously address the interpretability of its decisions and the predictability of its risk. This paper introduces uncertainty into a self-explanatory image recognition model so that the model can both interpret its decisions and predict risk. This approach enables the model to trust its decisions and explanations and to provide early warning and detailed analysis of risky decisions. In addition, this paper presents a process for using uncertainty and explanations to optimize the model, which significantly improves its application value in high-risk scenarios. The scheme proposed in this paper addresses the trust crisis caused by black-box image recognition models.
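The abstract combines two ingredients: prototype-based, self-explanatory classification (in the spirit of ProtoPNet) and an uncertainty score that flags risky decisions. The paper's exact architecture is not given on this page, so the sketch below is only an illustrative assumption: a feature vector is scored by similarity to learned prototypes (the explanation), and predictive entropy of the resulting class distribution serves as a simple risk proxy. All names and dimensions are hypothetical.

```python
import numpy as np

# Illustrative sketch only; not the paper's actual method.
# Prototype-based classifier with predictive entropy as an uncertainty proxy.

rng = np.random.default_rng(0)

n_prototypes, feature_dim, n_classes = 6, 16, 3
prototypes = rng.normal(size=(n_prototypes, feature_dim))   # learned prototype vectors (assumed)
class_weights = rng.normal(size=(n_prototypes, n_classes))  # prototype-to-class weights (assumed)

def predict_with_uncertainty(feature):
    # Similarity to each prototype via log((d + 1) / (d + eps)),
    # the similarity form used in ProtoPNet-style models.
    d = np.sum((prototypes - feature) ** 2, axis=1)
    sim = np.log((d + 1.0) / (d + 1e-4))
    # Class logits are a weighted sum of prototype similarities,
    # so each prediction is traceable to specific prototypes.
    logits = sim @ class_weights
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Predictive entropy: high entropy = uncertain, risky decision.
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return probs, sim, entropy

feature = rng.normal(size=feature_dim)
probs, sim, entropy = predict_with_uncertainty(feature)
# `sim` explains which prototypes drove the decision;
# `entropy` can trigger an early warning when it exceeds a threshold.
```

A deployment following the abstract's idea could reject or escalate inputs whose entropy exceeds a threshold, and use the per-prototype similarities to explain why the model was uncertain.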
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations