Evaluating Lossy Compression Rates of Deep Generative Models

Authors: Sicong Huang, Alireza Makhzani, Yanshuai Cao, Roger Grosse

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate lossy compression rates of VAEs, GANs, and adversarial autoencoders (AAEs) on the MNIST and CIFAR10 datasets.
Researcher Affiliation | Collaboration | ¹University of Toronto, ²Vector Institute for Artificial Intelligence, ³Borealis AI.
Pseudocode | No | The paper describes algorithms (e.g., Annealed Importance Sampling) using mathematical formulations but does not include a distinct pseudocode block or algorithm listing.
Open Source Code | Yes | The code for reproducing the experiments can be found at https://github.com/BorealisAI/rate_distortion and https://github.com/huangsicong/rate_distortion.
Open Datasets | Yes | We evaluate lossy compression rates of VAEs, GANs, and adversarial autoencoders (AAEs) on the MNIST and CIFAR10 datasets.
Dataset Splits | No | The paper uses MNIST and CIFAR-10 for training and evaluation but does not explicitly provide percentages, counts, or a methodology for train/validation/test splits beyond referencing the standard datasets.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models, or cloud computing instance types.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments (e.g., 'Python 3.8, PyTorch 1.9, and CUDA 11.1').
Experiment Setup | Yes | For the GAN experiments on MNIST (Fig. 4a), the label deep corresponds to three hidden layers of size 1024, and the label shallow corresponds to one hidden layer of size 1024. We trained shallow and deep GANs with Gradient Penalty (GAN-GP) (Gulrajani et al., 2017) with the code size d ∈ {2, 5, 10, 100} on MNIST.
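
As noted in the Pseudocode row, the paper relies on Annealed Importance Sampling (AIS) but gives no algorithm listing. Below is a minimal, generic AIS sketch in NumPy, not the authors' implementation: it anneals from a standard normal prior to an unnormalized target along a geometric path and estimates the log normalizing constant. The Gaussian likelihood term, step size, and linear annealing schedule are illustrative assumptions.

```python
import numpy as np

def log_prior(z):
    # Standard normal prior over the latent code, in the log domain.
    return -0.5 * np.sum(z ** 2, axis=-1) - 0.5 * z.shape[-1] * np.log(2.0 * np.pi)

def log_likelihood(z):
    # Hypothetical unnormalized log-likelihood / negative-distortion term.
    # In the paper this role is played by the decoder's distortion; here it is
    # just an illustrative Gaussian centred at 2.
    return -0.5 * np.sum((z - 2.0) ** 2, axis=-1)

def ais_log_partition(num_chains=100, num_steps=500, dim=2, step_size=0.2, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, num_steps + 1)    # annealing schedule
    z = rng.standard_normal((num_chains, dim))      # exact samples from the prior
    log_w = np.zeros(num_chains)                    # accumulated log importance weights

    def log_intermediate(z, beta):
        # Geometric path: prior^(1 - beta) * (prior * likelihood)^beta.
        return log_prior(z) + beta * log_likelihood(z)

    for beta_prev, beta in zip(betas[:-1], betas[1:]):
        # Importance-weight update: ratio of consecutive intermediate densities.
        log_w += log_intermediate(z, beta) - log_intermediate(z, beta_prev)
        # One Metropolis-Hastings step that leaves the current intermediate invariant.
        proposal = z + step_size * rng.standard_normal(z.shape)
        log_accept = log_intermediate(proposal, beta) - log_intermediate(z, beta)
        accept = np.log(rng.uniform(size=num_chains)) < log_accept
        z[accept] = proposal[accept]

    # log-mean-exp of the weights estimates the log normalizing constant of
    # prior(z) * likelihood(z), i.e. a marginal-likelihood-style quantity.
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

print("AIS estimate of the log partition function:", ais_log_partition())
```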
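
For the Experiment Setup row, here is a minimal PyTorch sketch of the "shallow" versus "deep" MLP generators, assuming fully connected layers with ReLU activations and a Tanh output over flattened 28x28 MNIST images. Only the hidden widths (1024) and code sizes d ∈ {2, 5, 10, 100} come from the quoted setup; everything else is an assumption rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

def make_generator(code_size: int, depth: str = "deep") -> nn.Sequential:
    # "deep": three hidden layers of size 1024; "shallow": one hidden layer of 1024,
    # as quoted in the Experiment Setup row. ReLU/Tanh choices are assumptions.
    hidden_sizes = [1024] * (3 if depth == "deep" else 1)
    layers, in_dim = [], code_size
    for h in hidden_sizes:
        layers += [nn.Linear(in_dim, h), nn.ReLU(inplace=True)]
        in_dim = h
    layers += [nn.Linear(in_dim, 28 * 28), nn.Tanh()]  # flattened 28x28 image in [-1, 1]
    return nn.Sequential(*layers)

# One generator per code size d used in the quoted MNIST GAN-GP setup.
generators = {d: make_generator(d, depth="deep") for d in (2, 5, 10, 100)}
z = torch.randn(16, 10)                                # a batch of 16 latent codes, d = 10
fake_images = generators[10](z).view(16, 1, 28, 28)
```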