OpenAlex · Updated hourly · Last updated: 11.03.2026, 09:30

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Interpretability of AI race detection model in medical imaging with saliency methods

2025 · 3 citations · Computational and Structural Biotechnology Journal · Open Access

3 citations · 9 authors · 2025

Abstract

Deep neural networks (DNNs) are powerful tools for classifying images. Using these convolutional models for medical images is challenging due to their complexity and large number of parameters, which make it hard to find clinically meaningful explanations for their decisions. To overcome the opaqueness inherent in such models, saliency techniques generate maps that highlight the regions of an image most important for the DNN's prediction. DNN models have shown the capability of detecting race from medical images of different modalities, which is concerning because such models under-diagnose patients from historically under-served races. The objective of this paper is to use explainability methods to detect the subtle cues that DNNs use to infer a patient's race from chest X-rays. Toward this end, we apply eight state-of-the-art saliency methods and propose a way to evaluate their effectiveness. We demonstrate that the salient region's size is crucial to understanding network behavior. When the salient region covers 30% of the image, we find that only the RISE method is effective at locating salient areas: the region it highlights suffices on its own to predict a patient's race from chest X-ray images, and removing it misleads the network's race detection. We therefore note that saliency maps in the medical field should be used with caution, as there is no available ground truth, and the network may occasionally employ low-level image features to compute predictions.
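The abstract describes an insertion/deletion-style evaluation: keep (or remove) the top 30% most salient pixels and check whether the model's prediction survives (or flips). A minimal sketch of that masking step is below; the helper names `top_fraction_mask` and `occlude` are illustrative and not taken from the paper, and the fill-with-mean choice is one common convention, not necessarily the authors'.

```python
import numpy as np

def top_fraction_mask(saliency, fraction=0.30):
    """Binary mask selecting the most salient `fraction` of pixels."""
    flat = saliency.ravel()
    k = max(1, int(round(fraction * flat.size)))
    # Value of the k-th largest saliency score; pixels at or above it are kept.
    threshold = np.partition(flat, -k)[-k]
    return saliency >= threshold

def occlude(image, mask, fill=None):
    """Replace masked pixels with a fill value (image mean by default)."""
    out = image.astype(float).copy()
    out[mask] = out.mean() if fill is None else fill
    return out

# Deletion test: occlude the salient region, then re-run the classifier on
# `occlude(image, mask)` and compare its prediction to the original one.
# Insertion test: occlude the complement, i.e. `occlude(image, ~mask)`.
```

A saliency method "locates" the evidence (as the paper argues for RISE) when the insertion image alone still yields the original prediction while the deletion image does not.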
