Deep Generative Models for Distribution-Preserving Lossy Compression

Authors: Michael Tschannen, Eirikur Agustsson, Mario Lucic

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present an extensive empirical evaluation of the proposed approach on two standard GAN data sets, CelebA [19] and LSUN bedrooms [20], realizing the first system that effectively solves the DPLC problem.
Researcher Affiliation | Collaboration | Michael Tschannen (ETH Zürich, michaelt@nari.ee.ethz.ch), Eirikur Agustsson (Google AI Perception, eirikur@google.com), Mario Lucic (Google Brain, lucic@google.com)
Pseudocode | No | The paper references algorithms from external works (e.g., 'WGAN algorithm [16, Algorithm 1]'), but does not include any pseudocode or algorithm blocks within its own text.
Open Source Code | Yes | Code is available at https://github.com/mitscha/dplc.
Open Datasets | Yes | We present an extensive empirical evaluation of the proposed approach on two standard GAN data sets, CelebA [19] and LSUN bedrooms [20], both downscaled to 64 × 64 resolution.
Dataset Splits | No | The paper mentions a 'testing set of 10k samples held out from the respective training set', but does not specify a separate validation set or explicit training/validation/test split percentages.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU model, CPU type, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as 'Adam optimizer [33]', 'DCGAN [30]', 'WGAN [16]', 'WAE [17]', and 'WGAN-GP [28]', but does not provide specific version numbers for these or other libraries/frameworks.
Experiment Setup | Yes | We set m = 128, n = 2 for CelebA, and m = 512, n = 4 for the LSUN bedrooms data set. [...] To train G by means of WAE-MMD and WGAN-GP we use the training parameters from [17] and [28], respectively. For Wasserstein++, we set γ in (11) to 2.5 × 10⁻⁵ for CelebA and to 10⁻⁴ for LSUN. Further, we use the same training parameters to solve (8) as for WAE-MMD. Thereby, to compensate for the increase in the reconstruction loss with decreasing rate, we adjust the coefficient of the MMD penalty, λ_MMD (see Appendix C), proportionally as a function of the reconstruction loss of the CAE baseline, i.e., λ_MMD(R) = const. · MSE_CAE(R).
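For readers who want to reuse the reported setup, the quoted hyperparameters and the λ_MMD scaling rule can be organized as in the minimal Python sketch below. The names (CONFIGS, mmd_penalty_coefficient, const) are illustrative assumptions and are not taken from the authors' released code at https://github.com/mitscha/dplc; the proportionality constant is left free because the paper only states λ_MMD(R) = const. · MSE_CAE(R).

    # Illustrative sketch (hypothetical names), summarizing the hyperparameters quoted above.
    CONFIGS = {
        "celeba":        {"m": 128, "n": 2, "gamma": 2.5e-5},  # gamma from Eq. (11), Wasserstein++
        "lsun_bedrooms": {"m": 512, "n": 4, "gamma": 1e-4},
    }

    def mmd_penalty_coefficient(mse_cae_at_rate, const=1.0):
        # lambda_MMD(R) = const * MSE_CAE(R): scale the MMD penalty proportionally to the
        # reconstruction loss of the CAE baseline at the target rate R.
        return const * mse_cae_at_rate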