This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation
Citations: 186
Authors: 4
Year: 2017
Abstract
Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them either operate on a single modality or stack multiple modalities as separate input channels, which ignores the correlations among them. To leverage the multiple modalities, we propose a deep convolutional encoder-decoder structure with fusion layers to incorporate different modalities of MRI data. In addition, we exploit convolutional LSTM (convLSTM) to model a sequence of 2D slices, and jointly learn the multi-modality fusion and convLSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 [13] show that our method outperforms state-of-the-art biomedical segmentation approaches.
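The re-weighting idea mentioned in the abstract addresses the fact that background voxels vastly outnumber lesion voxels, so an unweighted loss collapses onto the dominant label. The abstract does not specify the exact scheme, so the following is a minimal sketch using median-frequency balancing, one common choice for deriving per-class loss weights from label statistics; the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def class_reweighting(labels, num_classes):
    """Per-class weights inversely proportional to label frequency.

    Hypothetical sketch: rare classes (e.g. tumor voxels) receive larger
    weights so the segmentation loss does not converge to the dominant
    background label. Uses median-frequency balancing as an assumed scheme.
    """
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    freq = counts / counts.sum()
    # weight_c = median(freq) / freq_c; classes absent from the data get 0.
    weights = np.median(freq[freq > 0]) / np.maximum(freq, 1e-12)
    weights[counts == 0] = 0.0
    return weights

# Toy 2D label map: background (0) dominates, the "tumor" class (1) is rare.
labels = np.zeros((8, 8), dtype=np.int64)
labels[0, :2] = 1
w = class_reweighting(labels, num_classes=2)
# The rare class receives the larger weight.
assert w[1] > w[0]
```

These weights would then scale the per-class terms of a cross-entropy loss during the first training phase, before fine-tuning in a second phase.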
Related Works
A survey on deep learning in medical image analysis
2017 · 13,972 citations
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
2020 · 8,126 citations
Calculation of average PSNR differences between RD-curves
2001 · 4,093 citations
Magnetic Resonance Classification of Lumbar Intervertebral Disc Degeneration
2001 · 3,936 citations
Vertebral fracture assessment using a semiquantitative technique
1993 · 3,631 citations