OpenAlex · Updated hourly · Last updated: Mar 18, 2026, 16:40

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Visual attribution for deep learning segmentation in medical imaging

2022 · 8 citations · Medical Imaging 2022: Image Processing
Open full text at the publisher

8 citations · 2 authors · 2022

Abstract

Despite the widespread use of Convolutional Neural Networks (CNNs) for segmentation in medical imaging, there is yet to be a validated method to determine what regions of input images inform the models’ decisions. To advance the field, we have 1) modified three general, prevalent methods of classification-attribution to be applicable for use with segmentation models, 2) developed a novel method of attribution explicitly for segmentation models, and 3) formulated validation metrics for these attributions so results can be quantitatively compared. To adapt existing methods of classification-attribution, we newly employed a weighted sum across attribution maps from each post-bottleneck layer. For our novel method of attribution (Kernel-Weighted Contribution), a weighted sum of activations from each kernel across all post-bottleneck layers was weighted by each kernel’s dependent and independent contributions to the segmentation. We used the generated attribution maps to mask the input images and generate new predicted segmentations. The methods were then scored based on their sensitivity to region importance (Prediction Preserved) and ability to only attribute relevant regions (Image Preserved). All three adapted classification-attribution methods showed a significant increase in both Prediction Preserved and Image Preserved scores. Kernel-Weighted Contribution showed a median decreased Image Preserved score of 2-12% and increased Prediction Preserved score of 12-21% compared with modified classification attribution methods. These new methods provide insight into how segmentation models use regions of input images. Clinically relevant features can consequently be extracted from both foreground and relevant background regions. Additionally, the metrics of validation facilitate a quantitative and objective comparison of segmentation-attribution methods.
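The abstract describes an evaluation loop: an attribution map is used to mask the input image, the model re-predicts a segmentation from the masked input, and the method is scored on how well the prediction survives (Prediction Preserved) versus how little of the image was kept (Image Preserved). The paper does not give the exact formulas, so the sketch below is a minimal NumPy illustration under assumptions: binary thresholding of a normalized attribution map, Dice overlap as a proxy for Prediction Preserved, and retained-pixel fraction as a proxy for Image Preserved. All function names and metric definitions here are hypothetical.

```python
import numpy as np

def normalize(att):
    """Scale an attribution map to [0, 1] (assumed preprocessing step)."""
    att = att - att.min()
    rng = att.max()
    return att / rng if rng > 0 else att

def mask_image(image, attribution, threshold=0.5):
    """Keep only pixels the attribution marks as important.
    Assumption: hard thresholding; the paper may use soft weighting."""
    keep = normalize(attribution) >= threshold
    return image * keep, keep

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def prediction_preserved(seg_original, seg_from_masked):
    """Hypothetical proxy: Dice between the original prediction and the
    prediction obtained from the attribution-masked input."""
    return dice(seg_original, seg_from_masked)

def image_preserved(keep_mask):
    """Hypothetical proxy: fraction of the image retained after masking.
    A lower value means the attribution is more selective."""
    return keep_mask.mean()
```

In this framing, a good attribution method keeps Prediction Preserved high while Image Preserved stays low, matching the trade-off the abstract reports for Kernel-Weighted Contribution.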

Topics

Radiomics and Machine Learning in Medical Imaging · Artificial Intelligence in Healthcare and Education · AI in cancer detection