This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Deep Learning-Based Image Segmentation on Multimodal Medical Imaging
Citations: 424 · Authors: 5 · Year: 2019
Abstract
Multi-modality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multi-modal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multi-modal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep Convolutional Neural Networks (CNN) to contour the lesions of soft tissue sarcomas using multi-modal images, including those from Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET). The network trained with multi-modal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e. fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e. voting). This study provides empirical guidance for the design and application of multi-modal image analysis.
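The abstract distinguishes fusion at three points in the pipeline: the feature learning level, the classifier level, and the decision-making level (voting). The following toy sketch illustrates these three strategies with simple linear maps standing in for CNN layers; it is not the paper's implementation, and all shapes, names, and the use of random weights are assumptions made for illustration only.

```python
import numpy as np

# Illustrative sketch of the three cross-modality fusion levels from the
# abstract. Toy linear maps replace CNN layers; all dimensions are assumed.
rng = np.random.default_rng(0)
n_modalities, feat_dim, n_classes = 3, 8, 2  # e.g. MRI, CT, PET; 2 classes

# One toy feature vector per modality (stand-in for learned CNN features).
features = [rng.standard_normal(feat_dim) for _ in range(n_modalities)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# 1) Feature-level fusion: concatenate modality features, then classify.
W_feat = rng.standard_normal((n_classes, feat_dim * n_modalities))
p_feature_fusion = softmax(W_feat @ np.concatenate(features))

# 2) Classifier-level fusion: per-modality hidden layers, merged before
#    the final decision layer.
hidden = [np.tanh(rng.standard_normal((4, feat_dim)) @ f) for f in features]
W_cls = rng.standard_normal((n_classes, 4 * n_modalities))
p_classifier_fusion = softmax(W_cls @ np.concatenate(hidden))

# 3) Decision-level fusion (voting): independent per-modality classifiers,
#    whose output probabilities are averaged.
per_modality = [softmax(rng.standard_normal((n_classes, feat_dim)) @ f)
                for f in features]
p_decision_fusion = np.mean(per_modality, axis=0)
```

The abstract's empirical finding is that the first two strategies (fusing inside the network) generally outperform the third (fusing at the network output) for tumor segmentation.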
Related Works
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
2018 · 6,447 citations
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
2014 · 6,372 citations
A Comprehensive Survey on Graph Neural Networks
2021 · 3,310 citations
Brain tumor segmentation with Deep Neural Networks
2016 · 3,207 citations
Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images
2016 · 2,634 citations