OpenAlex · Updated hourly · Last updated: 11.03.2026, 06:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

VAMF-Net: multimodal fusion and multiscale attention for 3D brain tumor segmentation

2026 · 0 citations
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2026

Abstract

Accurate segmentation of gliomas is crucial for diagnosis, treatment planning, and prognostic assessment. However, existing multimodal MRI segmentation methods are limited by inadequate information fusion, particularly when addressing significant tumor scale variations. To address these challenges, we present VAMF-Net, a V-Net-based architecture comprising three coordinated modules. AMF performs voxel-wise, modality-adaptive fusion via a spatial attention map, enabling the network to assign dynamic weights to each MRI sequence. MFF aggregates multiscale context by employing parallel 3D dilated convolutions and cross-stage feature fusion, effectively handling large-scale variations. ConBlock3D + 3D-CBAM refines representations with channel and spatial attention and residual connections to sharpen boundaries. On the BraTS 2019 test set (with the model trained on BraTS 2020), VAMF-Net outperforms several advanced baselines (mean Dice: 0.910, HD95: 3.03, ET boundary HD95: 1.80), and ablation studies highlight the complementary contributions of the three modules. This study provides an efficient solution for multimodal medical image segmentation, with strong potential for clinical application.
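The voxel-wise, modality-adaptive fusion described for the AMF module can be illustrated with a minimal NumPy sketch. All shapes and attention values here are stand-ins: in VAMF-Net the attention maps are learned by convolutional layers, whereas this sketch uses random logits purely to show the fusion mechanics (a per-voxel softmax over the four MRI sequences, followed by a weighted sum):

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Four MRI sequences (e.g. T1, T1ce, T2, FLAIR), each a small 3D volume:
# shape (modalities, depth, height, width).
modalities = rng.standard_normal((4, 8, 16, 16))

# Hypothetical per-modality spatial attention logits (same shape);
# in the actual model these would be produced by learned convolutions.
logits = rng.standard_normal((4, 8, 16, 16))

# Voxel-wise attention: the 4 modality weights sum to 1 at every voxel.
weights = softmax(logits, axis=0)

# Modality-adaptive fusion: weighted sum over the modality axis.
fused = (weights * modalities).sum(axis=0)
print(fused.shape)  # (8, 16, 16)
```

Because the weights are computed per voxel rather than per volume, each MRI sequence can dominate the fused representation in the regions where it is most informative (e.g. FLAIR around edema, T1ce around enhancing tumor).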

Related works

Authors

Institutions

Topics

Advanced Neural Network Applications · Brain Tumor Detection and Classification · Medical Image Segmentation Techniques