This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Multimodal Neuroimaging Feature Learning With Multimodal Stacked Deep Polynomial Networks for Diagnosis of Alzheimer's Disease
441
Citations
5
Authors
2017
Year
Abstract
The accurate diagnosis of Alzheimer's disease (AD) and its early stage, i.e., mild cognitive impairment, is essential for timely treatment and possible delay of AD. Fusion of multimodal neuroimaging data, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), has shown its effectiveness for AD diagnosis. The deep polynomial network (DPN) is a recently proposed deep learning algorithm that performs well on both large-scale and small-size datasets. In this study, a multimodal stacked DPN (MM-SDPN) algorithm, consisting of two-stage SDPNs, is proposed to fuse and learn feature representations from multimodal neuroimaging data for AD diagnosis. Specifically, two SDPNs are first used to learn high-level features of MRI and PET, respectively, which are then fed to another SDPN to fuse the multimodal neuroimaging information. The proposed MM-SDPN algorithm is applied to the ADNI dataset to conduct both binary and multiclass classification tasks. Experimental results indicate that MM-SDPN is superior to state-of-the-art multimodal feature-learning-based algorithms for AD diagnosis.
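The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the layer here uses random (untrained) projections with element-wise products as a simplified stand-in for a trained polynomial layer, and all feature dimensions and array names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dpn_layer(x, out_dim, rng):
    """One simplified polynomial layer: two random linear projections
    combined by an element-wise product, yielding second-order
    (polynomial) features of the input. A trained DPN would learn
    these projections instead of sampling them."""
    d = x.shape[1]
    w1 = rng.standard_normal((d, out_dim))
    w2 = rng.standard_normal((d, out_dim))
    return (x @ w1) * (x @ w2)

def sdpn(x, dims, rng):
    """Stacked DPN: apply polynomial layers in sequence."""
    for d in dims:
        x = dpn_layer(x, d, rng)
    return x

# Hypothetical ROI-level features for 4 subjects
mri = rng.standard_normal((4, 90))  # e.g., 90 MRI region features
pet = rng.standard_normal((4, 90))  # matching PET region features

# Stage 1: modality-specific SDPNs learn high-level features
h_mri = sdpn(mri, [32, 16], rng)
h_pet = sdpn(pet, [32, 16], rng)

# Stage 2: a further SDPN fuses the concatenated multimodal features;
# the fused representation would then feed a classifier (e.g., SVM)
fused = sdpn(np.concatenate([h_mri, h_pet], axis=1), [16, 8], rng)
print(fused.shape)  # (4, 8)
```

The key design point mirrored here is that fusion happens on learned high-level representations rather than on raw modality features, which the abstract credits for the method's effectiveness on small datasets.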
Related Works
ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design
2018 · 6,429 citations
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
2014 · 6,357 citations
A Comprehensive Survey on Graph Neural Networks
2021 · 3,310 citations
Brain tumor segmentation with Deep Neural Networks
2016 · 3,204 citations
Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images
2016 · 2,630 citations