This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
VAMF-Net: multimodal fusion and multiscale attention for 3D brain tumor segmentation
Citations: 0
Authors: 2
Year: 2026
Abstract
Accurate segmentation of gliomas is crucial for diagnosis, treatment planning, and prognostic assessment. However, existing multimodal MRI segmentation methods are limited by inadequate information fusion, particularly when addressing significant tumor scale variations. To address these challenges, we present VAMF-Net, a V-Net-based architecture comprising three coordinated modules. AMF performs voxel-wise, modality-adaptive fusion via a spatial attention map, enabling the network to assign dynamic weights to each MRI sequence. MFF aggregates multiscale context by employing parallel 3D dilated convolutions and cross-stage feature fusion, effectively handling large-scale variations. ConBlock3D + 3D-CBAM refines representations with channel and spatial attention and residual connections to sharpen boundaries. On the BraTS 2019 test set (with the model trained on BraTS 2020), VAMF-Net outperforms several advanced baselines (mean Dice: 0.910, HD95: 3.03, ET boundary HD95: 1.80), and ablation studies highlight the complementary contributions of the three modules. This study provides an efficient solution for multimodal medical image segmentation, with strong potential for clinical application.
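The abstract's voxel-wise, modality-adaptive fusion idea can be illustrated with a minimal NumPy sketch. This is not the paper's AMF implementation (its details are only available in the full article); it merely shows the general pattern the abstract describes: per-voxel attention weights over the MRI sequences, normalized with a softmax and used in a weighted sum. The function and variable names here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_modality_fusion(modalities, attention_logits):
    """Fuse M modality volumes of shape (M, D, H, W) into one (D, H, W) volume.

    `attention_logits` (same shape as `modalities`) would, in a real network,
    be produced by a learned spatial-attention branch; here it is just an input.
    Each voxel gets its own weight distribution over the M modalities.
    """
    weights = softmax(attention_logits, axis=0)   # per-voxel weights, sum to 1 over M
    return (weights * modalities).sum(axis=0)     # weighted sum across modalities

# Illustrative usage: 4 MRI sequences (e.g. T1, T1ce, T2, FLAIR) as a toy volume.
volumes = np.random.rand(4, 8, 16, 16)
logits = np.zeros_like(volumes)                   # uniform logits -> plain average
fused = adaptive_modality_fusion(volumes, logits)
```

With uniform logits the fusion reduces to a plain mean over modalities; a trained attention branch would instead emphasize, per voxel, whichever sequence is most informative (e.g. T1ce near enhancing tumor).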
Related works
Deep Residual Learning for Image Recognition
2016 · 215,868 citations
U-Net: Convolutional Networks for Biomedical Image Segmentation
2015 · 85,833 citations
ImageNet classification with deep convolutional neural networks
2017 · 75,547 citations
Very Deep Convolutional Networks for Large-Scale Image Recognition
2014 · 75,404 citations
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
2016 · 52,596 citations