This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
From explanation to intervention: Interactive knowledge extraction from Convolutional Neural Networks used in radiology
5 citations · 5 authors · 2024
Abstract
Deep Learning models such as Convolutional Neural Networks (CNNs) are very effective at extracting complex image features from medical X-rays. However, the limited interpretability of CNNs has hampered their deployment in clinical settings, as they have failed to gain clinicians' trust. In this work, we propose an interactive framework that allows clinicians to ask what-if questions and intervene in the decisions of a CNN, with the aim of increasing trust in the system. The framework translates a layer of a trained CNN into a measurable and compact set of symbolic rules. Expert interactions with visualizations of the rules promote the use of clinically relevant CNN kernels and attach meaning to the rules. The definition and relevance of the kernels are supported by radiomics analyses and permutation evaluations, respectively. CNN kernels that do not have a clinically meaningful interpretation are removed without affecting model performance. By allowing clinicians to evaluate the impact of adding or removing kernels from the rule set, our approach produces an interpretable refinement of the data-driven CNN in alignment with medical best practice.
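The abstract describes two mechanisms: extracting a compact symbolic rule set from a CNN layer, and using permutation evaluations to decide which kernels can be removed without hurting performance. The paper's exact algorithms are not given on this page, so the sketch below is only a hypothetical illustration of the general idea: synthetic "kernel activations" stand in for a trained CNN layer, a shallow decision tree acts as the rule-producing surrogate, and permutation importance ranks kernels by how much accuracy drops when each one is shuffled.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic stand-in for per-image kernel activations from one CNN layer
# (n_images x n_kernels); in the paper these would come from a trained model.
n_images, n_kernels = 500, 8
acts = rng.normal(size=(n_images, n_kernels))
# Hypothetical ground truth: only kernels 0 and 3 carry the signal.
labels = (acts[:, 0] + 0.8 * acts[:, 3] > 0).astype(int)

# Binarize activations ("kernel fires / does not fire") and fit a shallow
# decision tree as a surrogate that yields compact, readable symbolic rules.
fired = (acts > 0).astype(int)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(fired, labels)
print(export_text(tree, feature_names=[f"kernel_{i}" for i in range(n_kernels)]))

# Permutation evaluation: shuffle each kernel's column and measure the
# accuracy drop; kernels with near-zero drop are candidates for removal.
imp = permutation_importance(tree, fired, labels, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"kernel_{i}: mean importance {imp.importances_mean[i]:+.3f}")
```

In this toy setup, the signal-carrying kernels (0 and 3) surface with clearly positive importance while the noise kernels hover near zero, mirroring how a clinician could be shown which kernels the rules actually rely on before pruning the rest.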
Similar works
New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1)
2008 · 28,795 cit.
TNM Classification of Malignant Tumours
1987 · 16,123 cit.
A survey on deep learning in medical image analysis
2017 · 13,500 cit.
Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening
2011 · 10,736 cit.
The American Joint Committee on Cancer: the 7th Edition of the AJCC Cancer Staging Manual and the Future of TNM
2010 · 9,101 cit.