OpenAlex · Updated hourly · Last updated: 13.03.2026, 16:32

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

From explanation to intervention: Interactive knowledge extraction from Convolutional Neural Networks used in radiology

2024 · 5 citations · PLoS ONE · Open Access
Open full text at the publisher

Citations: 5 · Authors: 5 · Year: 2024

Abstract

Deep Learning models such as Convolutional Neural Networks (CNNs) are very effective at extracting complex image features from medical X-rays. However, the limited interpretability of CNNs has hampered their deployment in medical settings, as they have failed to gain trust among clinicians. In this work, we propose an interactive framework that allows clinicians to ask what-if questions and intervene in the decisions of a CNN, with the aim of increasing trust in the system. The framework translates a layer of a trained CNN into a measurable and compact set of symbolic rules. Expert interactions with visualizations of the rules promote the use of clinically relevant CNN kernels and attach meaning to the rules. The definition and relevance of the kernels are supported by radiomics analyses and permutation evaluations, respectively. CNN kernels that do not have a clinically meaningful interpretation are removed without affecting model performance. By allowing clinicians to evaluate the impact of adding or removing kernels from the rule set, our approach produces an interpretable refinement of the data-driven CNN in alignment with medical best practice.
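The permutation evaluation mentioned in the abstract can be sketched as follows: a kernel's relevance is estimated by shuffling its activation values across samples and measuring the resulting drop in accuracy of a surrogate rule model. This is a minimal illustrative sketch, not the paper's implementation; the function names, the toy data, and the simple threshold rule are all assumptions.

```python
import numpy as np

def permutation_relevance(activations, labels, predict, n_repeats=20, seed=0):
    """Estimate per-kernel relevance via permutation evaluation.

    activations : (n_samples, n_kernels) array of per-kernel activation scores
    labels      : (n_samples,) ground-truth class labels
    predict     : callable mapping an activations array to predicted labels
                  (e.g. an extracted symbolic rule set)
    Returns the baseline accuracy and, per kernel, the mean accuracy drop
    observed when that kernel's column is shuffled across samples.
    """
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(activations) == labels)
    scores = np.zeros(activations.shape[1])
    for k in range(activations.shape[1]):
        drops = []
        for _ in range(n_repeats):
            shuffled = activations.copy()
            # Break the association between kernel k and the labels only.
            shuffled[:, k] = rng.permutation(shuffled[:, k])
            drops.append(baseline - np.mean(predict(shuffled) == labels))
        scores[k] = np.mean(drops)
    return baseline, scores

# Toy example: kernel 0 carries the class signal, kernel 1 is pure noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
acts = np.column_stack([y + 0.1 * rng.normal(size=500),
                        rng.normal(size=500)])
rule = lambda a: (a[:, 0] > 0.5).astype(int)  # a simple extracted rule
base, rel = permutation_relevance(acts, y, rule)
```

Under this setup, shuffling the informative kernel's column collapses the surrogate's accuracy, while shuffling the noise kernel leaves predictions unchanged; a kernel with near-zero relevance is a candidate for removal from the rule set.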

Topics

Radiomics and Machine Learning in Medical Imaging · Artificial Intelligence in Healthcare and Education · AI in cancer detection