Conditional Image Generation by Conditioning Variational Auto-Encoders

Authors: William Harvey, Saeid Naderiparizi, Frank Wood

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "We demonstrate our approach on several conditional generation tasks in the image domain but focus in particular on stochastic image completion: the problem of inferring the posterior distribution over images given the observation of a subset of pixel values. ... This is supported empirically by results indicating that, not only is the visual quality of our image completions (see Fig. 1) close to the state-of-the-art (Zhao et al., 2021), but our coverage of the true posterior over image completions is superior to that of any of our baselines." (Section 4, Experiments)
Researcher Affiliation: Collaboration. "William Harvey, Saeid Naderiparizi & Frank Wood, Department of Computer Science, University of British Columbia, Vancouver, Canada, {wsgh,saeidnp,fwood}@cs.ubc.ca. Frank Wood is also affiliated with the Montréal Institute for Learning Algorithms (Mila) and Inverted AI."
Pseudocode: No. The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code: Yes. "We release code for training IPA and IPA-R, code for using the trained artifacts to perform Bayesian experimental design and out-of-distribution detection, and various pretrained models." (https://github.com/plai-group/ipa)
Open Datasets: Yes. "We create an IPA image completion model based on the VD-VAE unconditional architecture (Child, 2020), and evaluate it for image completion on three datasets: CIFAR-10 (Krizhevsky et al., 2009), ImageNet-64 (Deng et al., 2009), and FFHQ-256 (Karras et al., 2019)." Also: "We experiment on the NIH Chest X-ray 14 dataset (Wang et al., 2017) at 256×256 resolution."
Dataset Splits: No. The paper references standard datasets (CIFAR-10, ImageNet-64, FFHQ-256, NIH Chest X-ray 14) and mentions using their test sets, but it does not specify the train/validation/test splits (e.g., percentages or sample counts) used in its experiments.
Hardware Specification: Yes. "CIFAR-10: GPUs V100, 2080 Ti... ImageNet-64: GPUs V100, 2080 Ti... FFHQ-256: GPUs V100, 2080 Ti... Edges2Bags: GPUs 2080 Ti... Edges2Shoes: GPUs 2080 Ti... Chest X-ray: GPUs 2080 Ti, V100" Also: "We trained the ImageNet-32 VAE on a GeForce RTX 2080 Ti for 14 days... We trained the Chest X-ray VAE on 4 V100 GPUs for about 5 days."
Software Dependencies: No. The paper mentions Weights & Biases (Biewald, 2020) as experiment-tracking infrastructure but does not provide version numbers for any software dependencies, such as deep learning frameworks or libraries.
Experiment Setup: Yes. "Most training hyperparameters were the same as those used by Child (2020) for unconditional training of the corresponding architectures. We report the significant differences in Table 3 and the following paragraph. Learning rates were selected with sweeps over three values, and the batch sizes selected were the largest compatible with the GPU's memory." Also: "We train g to estimate these using a cross-entropy loss. ... for 32,000 iterations with a batch size of 32 and learning rate 1×10⁻⁵."
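For illustration only, the quoted training recipe (cross-entropy loss, 32,000 iterations, batch size 32, learning rate 1×10⁻⁵) can be sketched end to end. The paper's estimator g is a neural network whose architecture is not reproduced here; this minimal stand-in trains a toy logistic-regression "g" on synthetic data with the same reported hyperparameters, purely to make the setup concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for the real task.
X = rng.normal(size=(1024, 8))
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(np.float64)

# Hyperparameters as reported in the paper's experiment setup.
lr = 1e-5            # learning rate 1×10⁻⁵
batch_size = 32      # batch size 32
iterations = 32_000  # 32,000 iterations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression stand-in for g (the real g is a neural network).
w = np.zeros(8)
b = 0.0

for step in range(iterations):
    idx = rng.integers(0, len(X), size=batch_size)
    xb, yb = X[idx], y[idx]
    p = sigmoid(xb @ w + b)
    # Gradient of the mean cross-entropy loss w.r.t. the logits.
    grad_logit = (p - yb) / batch_size
    w -= lr * (xb.T @ grad_logit)
    b -= lr * grad_logit.sum()

final_p = sigmoid(X @ w + b)
accuracy = float(((final_p > 0.5) == (y > 0.5)).mean())
print(f"train accuracy after {iterations} iterations: {accuracy:.3f}")
```

With such a small learning rate the weights move slowly, which is why the large iteration count matters; the sketch is only meant to show how the three quoted hyperparameters fit together in a cross-entropy training loop.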