This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable multimodal brain imaging through a multiple-branch neural network
Citations: 0 · Authors: 4 · Year: 2025
Abstract
• The role of each imaging modality is evaluated in the identification and segmentation of lesions.
• Changes in the importance of the imaging modalities are evaluated across different segmentation tasks.
• The impact and challenges of using AI-generated data in place of real counterpart scans are evaluated, to check how the network can explain potential differences.
Brain studies require the use of several complementary imaging modalities. When a modality is unavailable, Artificial Intelligence (AI) has recently provided ways to estimate it. Radiologists modulate the use of the available modalities depending on the task they have to perform. We aim to artificially trace this radiological process through a multi-branch neural network architecture, the StarNet. The goal is to explain how and where different imaging modalities, either actually acquired or artificially reconstructed, are used in different radiological tasks, by reading inside the structure of the network. To do that, StarNet includes several satellite networks, one per source modality, connected at each layer by a central unit. This design enables us to assess the contribution of each imaging modality, to identify where that contribution occurs, and to quantify the variations when certain modalities are substituted with AI-generated counterparts. The ultimate goal is to enable data-related and task-related ablation studies through the complete explainability of StarNet, thus offering radiologists clear guidance on which imaging sequences contribute to the task, to what extent, and at which stages of the process. As an example, we applied the proposed architecture to 2D slices extracted from 3D volumes acquired with multimodal magnetic resonance imaging (MRI), to assess:
1. the role of the imaging modalities used;
2. how that role changes when the radiological task changes;
3. the effects of synthetic data on the process.
The results are presented and discussed.
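To illustrate the kind of architecture the abstract describes, the following PyTorch sketch shows one plausible way to realise a StarNet-like multi-branch design: one satellite sub-network per modality, re-connected to a central fusion unit after every stage, with fusion weights that can be read out per stage and per modality. The layer sizes, the weighted-sum fusion rule, and all names (SatelliteBlock, StarNetSketch, modality_weights) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multi-branch ("star") network, assumed from the abstract:
# one satellite branch per imaging modality, fused at every stage by a central
# unit. The fusion rule and all hyperparameters are hypothetical.
import torch
import torch.nn as nn


class SatelliteBlock(nn.Module):
    """One convolutional stage of a per-modality satellite branch."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)


class StarNetSketch(nn.Module):
    """Hypothetical multi-branch segmenter: each modality has its own branch,
    and a central unit fuses the branch features after every stage using
    learnable per-modality weights that can later be inspected to gauge how
    much each modality contributes at each depth."""

    def __init__(self, num_modalities: int, num_classes: int, widths=(16, 32, 64)):
        super().__init__()
        self.num_modalities = num_modalities
        self.num_stages = len(widths)
        chans = [1] + list(widths)
        # One stack of blocks per modality (the "satellites").
        self.branches = nn.ModuleList([
            nn.ModuleList([SatelliteBlock(chans[i], chans[i + 1])
                           for i in range(self.num_stages)])
            for _ in range(num_modalities)
        ])
        # Central unit: one learnable logit per (stage, modality); a softmax
        # over modalities gives normalised, directly comparable fusion weights.
        self.fusion_logits = nn.Parameter(torch.zeros(self.num_stages, num_modalities))
        # Per-pixel classification head on the final fused feature map.
        self.head = nn.Conv2d(widths[-1], num_classes, kernel_size=1)

    def forward(self, inputs: list[torch.Tensor]) -> torch.Tensor:
        # `inputs` holds one (B, 1, H, W) tensor per modality.
        feats = inputs
        for stage in range(self.num_stages):
            feats = [self.branches[m][stage](feats[m])
                     for m in range(self.num_modalities)]
            weights = torch.softmax(self.fusion_logits[stage], dim=0)
            fused = sum(weights[m] * feats[m] for m in range(self.num_modalities))
            # Broadcast the fused representation back to every satellite so the
            # branches stay connected to the central unit at each layer.
            feats = [fused] * self.num_modalities
        return self.head(fused)

    def modality_weights(self) -> torch.Tensor:
        """Normalised per-stage, per-modality fusion weights."""
        return torch.softmax(self.fusion_logits, dim=1)


if __name__ == "__main__":
    # Example: four MRI sequences (e.g. T1, T1ce, T2, FLAIR) as 128x128 slices.
    net = StarNetSketch(num_modalities=4, num_classes=3)
    slices = [torch.randn(2, 1, 128, 128) for _ in range(4)]
    print(net(slices).shape)       # torch.Size([2, 3, 128, 128])
    print(net.modality_weights())  # rows: stages, columns: modalities
```

In such a design, the learned fusion weights (or any attribution method applied per branch) provide the handle for the ablation-style questions raised in the abstract: which modality matters, at which stage, and how the picture changes when a real scan is replaced by an AI-generated one.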
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,535 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,268 citations
"Why Should I Trust You?"
2016 · 14,361 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,153 citations