This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
AdaGAN: Boosting Generative Models
Citations: 147
Authors: 5
Year: 2017
Abstract
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes.
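The iterative mixture-building procedure described in the abstract can be illustrated with a toy sketch. The snippet below is not the paper's algorithm: the per-step GAN training is replaced by a weighted single-Gaussian fit, and the paper's optimal reweighting scheme is replaced by a simple inverse-density heuristic. All function names and the mixing weight `beta` are illustrative assumptions; the sketch only shows the greedy loop of reweighting the sample and appending a new component.

```python
import numpy as np

def fit_component(data, weights):
    # Stand-in for the per-step GAN training: fit one Gaussian to the
    # reweighted 1-D sample (weights are assumed to sum to 1).
    mu = np.sum(weights * data)
    var = np.sum(weights * (data - mu) ** 2)
    return mu, np.sqrt(var) + 1e-8

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def adagan_sketch(data, n_steps=3, beta=0.5):
    # Greedily built mixture of (weight, mu, sigma) components.
    components = []
    for _ in range(n_steps):
        if not components:
            weights = np.full(len(data), 1.0 / len(data))
            mix_weight = 1.0
        else:
            # Reweight the sample: points the current mixture covers
            # poorly (low density) get larger weight, steering the new
            # component toward missing modes. A heuristic, not the
            # paper's derived reweighting.
            density = sum(w * gaussian_pdf(data, m, s) for w, m, s in components)
            weights = 1.0 / (density + 1e-8)
            weights /= weights.sum()
            mix_weight = beta
            # Shrink existing component weights so the mixture stays normalized.
            components = [(w * (1.0 - beta), m, s) for w, m, s in components]
        mu, sigma = fit_component(data, weights)
        components.append((mix_weight, mu, sigma))
    return components

rng = np.random.default_rng(0)
# Bimodal toy data: two well-separated modes.
data = np.concatenate([rng.normal(-5, 0.5, 700), rng.normal(5, 0.5, 300)])
mixture = adagan_sketch(data, n_steps=3)
```

After three steps the mixture holds three components whose weights sum to one, mirroring the additive aggregation described in the abstract.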
Related Work
Deep learning
2015 · 80,491 citations
Learning Multiple Layers of Features from Tiny Images
2024 · 25,472 citations
GAN (Generative Adversarial Nets)
2017 · 21,794 citations
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
2017 · 21,737 citations
SSD: Single Shot MultiBox Detector
2016 · 20,680 citations