This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
1,402
Citations
2
Authors
—
Year
Abstract
We present a representation learning method that learns features at multiple different levels of scale. Working within the unsupervised framework of denoising autoencoders, we observe that when the input is heavily corrupted during training, the network tends to learn coarse-grained features, whereas when the input is only slightly corrupted, the network tends to learn fine-grained features. This motivates the scheduled denoising autoencoder, which starts with a high level of noise that lowers as training progresses. We find that the resulting representation yields a significant boost on a later supervised task compared to the original input, or to a standard denoising autoencoder trained at a single noise level. After supervised fine-tuning our best model achieves the lowest ever reported error on the CIFAR-10 data set among permutation-invariant methods.
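The idea in the abstract — start training a denoising autoencoder with heavy input corruption and anneal the noise level down as training progresses — can be sketched as a short training loop. The following is a minimal illustrative NumPy implementation, not the authors' code: the linear annealing from 0.5 to 0.05, the use of masking noise, tied weights, and all hyperparameters (`n_hidden`, `lr`, `epochs`) are assumptions chosen for clarity.

```python
import numpy as np

def noise_schedule(epoch, epochs, noise_start=0.5, noise_end=0.05):
    """Linearly anneal the corruption level from noise_start to noise_end
    (illustrative schedule; the paper's exact schedule may differ)."""
    if epochs <= 1:
        return noise_end
    return noise_start + (noise_end - noise_start) * epoch / (epochs - 1)

def train_scheduled_dae(X, n_hidden=16, epochs=50, lr=0.1,
                        noise_start=0.5, noise_end=0.05, seed=0):
    """One-hidden-layer denoising autoencoder with tied weights, trained
    with masking noise whose level follows noise_schedule."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, size=(d, n_hidden))  # tied encoder/decoder weights
    b_h = np.zeros(n_hidden)                      # hidden-layer bias
    b_o = np.zeros(d)                             # reconstruction bias
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for epoch in range(epochs):
        p = noise_schedule(epoch, epochs, noise_start, noise_end)
        mask = rng.random(X.shape) > p            # zero out a fraction p of inputs
        X_tilde = X * mask                        # corrupted input
        H = sigmoid(X_tilde @ W + b_h)            # encode the corrupted input
        X_hat = sigmoid(H @ W.T + b_o)            # decode: reconstruct clean X
        # backprop of squared reconstruction error against the *clean* input
        d_out = (X_hat - X) * X_hat * (1.0 - X_hat)
        d_hid = (d_out @ W) * H * (1.0 - H)
        W -= lr * (X_tilde.T @ d_hid + d_out.T @ H) / n
        b_h -= lr * d_hid.mean(axis=0)
        b_o -= lr * d_out.mean(axis=0)
    return W, b_h, b_o
```

After training, the encoder output `sigmoid(X @ W + b_h)` serves as the learned representation to be fed into a later supervised classifier, as the abstract describes; early high-noise epochs push the weights toward coarse features, and the late low-noise epochs refine them.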
Similar works
Deep learning
2015 · 79,762 cit.
Learning Multiple Layers of Features from Tiny Images
2024 · 25,469 cit.
GAN (Generative Adversarial Nets)
2017 · 21,791 cit.
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
2017 · 21,568 cit.
SSD: Single Shot MultiBox Detector
2016 · 20,460 cit.