This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Interpretability of AI race detection model in medical imaging with saliency methods
Citations: 3
Authors: 9
Year: 2025
Abstract
Deep neural networks (DNNs) are powerful tools for classifying images. Applying these convolutional models to medical images is challenging due to their complexity and large number of parameters, which make it hard to find clinically meaningful explanations for their decisions. To overcome the opaqueness inherent in such models, saliency techniques generate maps that highlight the regions of an image important for the DNN's prediction. DNN models have shown the capability of detecting a patient's race from medical images of different modalities, which is concerning because they under-diagnose patients from historically under-served races. The objective of this paper is to use explainability methods to detect the subtle cues that DNNs use to infer a patient's race from chest X-rays. Toward this end, we apply eight state-of-the-art methods and propose a way to evaluate their effectiveness. We demonstrate that the salient region's size is crucial to understanding network behavior. When the salient region covers 30% of the image, we find that only the RISE method is effective at locating salient areas, as the region it identifies can both predict a patient's race from chest X-ray images on its own and mislead the network on race detection when removed. We therefore note that saliency maps in the medical field should be used with caution, as there is no available ground truth, and the network may occasionally employ low-level image features to compute predictions.
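The evaluation described in the abstract, checking whether the top 30% most salient pixels suffice to predict race on their own and whether removing them degrades the prediction, can be sketched as a simple occlusion test. The following is a minimal illustration, not the paper's actual pipeline; the function name and the NumPy stand-ins for the image and saliency map are assumptions for the example.

```python
import numpy as np

def mask_salient_region(image, saliency, fraction=0.30, keep=False):
    """Keep only (keep=True) or zero out (keep=False) the top `fraction`
    most salient pixels of a 2-D image, given a same-shaped saliency map.

    NOTE: hypothetical helper for illustration; the paper's exact
    masking procedure is not specified in the abstract.
    """
    k = int(fraction * image.size)
    # Flattened indices of the k most salient pixels.
    top = np.argsort(saliency.ravel())[-k:]
    mask = np.zeros(image.size, dtype=bool)
    mask[top] = True
    mask = mask.reshape(image.shape)
    # keep=True -> salient region alone; keep=False -> salient region removed.
    return np.where(mask, image, 0.0) if keep else np.where(mask, 0.0, image)

# Stand-in data (a real test would use a chest X-ray and a RISE map).
rng = np.random.default_rng(0)
img = rng.random((8, 8))
sal = rng.random((8, 8))

only_salient = mask_salient_region(img, sal, keep=True)
without_salient = mask_salient_region(img, sal, keep=False)
# The two masked images partition the original pixel values.
assert np.allclose(only_salient + without_salient, img)
```

One would then feed `only_salient` and `without_salient` through the trained classifier: a saliency method is judged effective when the former still yields the original prediction and the latter does not.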
Related work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?"
2016 · 14,150 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations