This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Variational image compression with a scale hyperprior
860 citations · 5 authors · published 2018
Abstract
We describe an end-to-end trainable model for image compression based on variational autoencoders. The model incorporates a hyperprior to effectively capture spatial dependencies in the latent representation. This hyperprior relates to side information, a concept universal to virtually all modern image codecs, but largely unexplored in image compression using artificial neural networks (ANNs). Unlike existing autoencoder compression methods, our model trains a complex prior jointly with the underlying autoencoder. We demonstrate that this model leads to state-of-the-art image compression when measuring visual quality using the popular MS-SSIM index, and yields rate-distortion performance surpassing published ANN-based methods when evaluated using a more traditional metric based on squared error (PSNR). Furthermore, we provide a qualitative comparison of models trained for different distortion metrics.
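The abstract describes jointly minimizing the bitrate of the quantized latents (whose distribution is predicted from hyper-latent side information) together with a distortion term. The sketch below illustrates that rate-distortion objective in plain NumPy. It is a minimal illustration, not the authors' implementation: the zero-mean Gaussian entropy model with hyperprior-predicted scales, the unit-width quantization bins, and the `lam` trade-off weight are assumptions inferred from the abstract, and all tensors are random stand-ins for real encoder outputs.

```python
import math
import numpy as np

def gaussian_rate_bits(y, scale, eps=1e-9):
    """Bits to code integer-quantized latents y under a zero-mean Gaussian
    whose per-element scale is predicted by the hyperprior: probability
    mass of the unit-width bin [y - 0.5, y + 0.5]."""
    cdf = lambda x: 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))
    p = np.clip(cdf((y + 0.5) / scale) - cdf((y - 0.5) / scale), eps, 1.0)
    return -np.log2(p).sum()

def rd_loss(x, x_hat, y, scale_y, z, scale_z, lam=0.01):
    """Rate-distortion loss: rate of latents y (given the hyperprior's
    scales) plus rate of hyper-latents z, plus lam times squared error."""
    rate = gaussian_rate_bits(y, scale_y) + gaussian_rate_bits(z, scale_z)
    distortion = np.mean((x - x_hat) ** 2)  # squared error (PSNR-oriented)
    return rate + lam * distortion

# Random stand-ins for an image, its reconstruction, and quantized latents.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8))
x_hat = x + 0.1 * rng.normal(size=(8, 8))
y = np.round(rng.normal(size=(4, 4)))             # quantized latents
scale_y = np.abs(rng.normal(size=(4, 4))) + 0.5   # scales from hyperprior
z = np.round(rng.normal(size=(2, 2)))             # quantized hyper-latents
scale_z = np.full((2, 2), 1.0)                    # fixed prior over z
loss = rd_loss(x, x_hat, y, scale_y, z, scale_z)
```

In the paper's setting the scales `scale_y` come from a learned hyper-decoder applied to `z`, so both the autoencoder and this entropy model are trained end-to-end by gradient descent on `loss`.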
Related works
Deep learning
2015 · 79,818 citations
Learning Multiple Layers of Features from Tiny Images
2024 · 25,469 citations
GAN (Generative Adversarial Nets)
2017 · 21,792 citations
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
2017 · 21,578 citations
SSD: Single Shot MultiBox Detector
2016 · 20,469 citations