This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainability of deep learning models in medical image classification
Citations: 2
Authors: 4
Year: 2022
Abstract
The ability to explain the reasons for one’s decisions to others is an important aspect of human intelligence. We look at the explainability of the deep learning models that are most frequently used in medical image processing tasks. Explainability of machine learning models in medicine is essential for understanding how a particular ML model works and how it solves the problems it was designed for. The work presented in this paper focuses on the classification of lung CT scans for the detection of COVID-19 patients. We used CNN and DenseNet models for the classification and explored the application of selected visual explainability techniques to provide insight into how the models work when processing the images.
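The visual explainability techniques mentioned in the abstract typically follow the Grad-CAM recipe listed under related works: weight each convolutional feature map by the spatially averaged gradient of the target-class score, sum, and rectify. The sketch below shows only that weighting step with random stand-in arrays; the feature maps and gradients here are illustrative assumptions, not outputs of the paper’s CNN or DenseNet models.

```python
import numpy as np

# Grad-CAM weighting step. Assume `fmap` (C, H, W) holds the feature maps
# of the last convolutional layer and `grads` (C, H, W) the gradients of
# the target-class logit w.r.t. those maps, both already extracted from a
# network. Random stand-ins are used here for self-containment.
rng = np.random.default_rng(0)
fmap = rng.random((16, 8, 8))
grads = rng.standard_normal((16, 8, 8))

weights = grads.mean(axis=(1, 2))  # alpha_k: global average pooling of gradients
cam = np.maximum((weights[:, None, None] * fmap).sum(axis=0), 0.0)  # ReLU
cam /= cam.max() + 1e-8  # normalize heatmap to [0, 1]
```

The resulting `cam` array is upsampled to the input image size and overlaid on the CT scan to highlight the regions that contributed most to the predicted class.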
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,374 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?"
2016 · 14,261 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,126 citations