Emerging Convolutions for Generative Normalizing Flows

Authors: Emiel Hoogeboom, Rianne van den Berg, Max Welling

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that the flexibility of d × d convolutions significantly improves the performance of generative flow models on galaxy images, CIFAR10 and ImageNet.
Researcher Affiliation | Academia | 1 UvA-Bosch Delta Lab, University of Amsterdam, Netherlands; 2 University of Amsterdam, Netherlands; 3 Canadian Institute for Advanced Research (CIFAR). Correspondence to: Emiel Hoogeboom <e.hoogeboom@uva.nl>.
Pseudocode | No | The paper describes methods in text and equations but does not include structured pseudocode or algorithm blocks; a hedged illustrative sketch of the core operation follows this table.
Open Source Code | Yes | The code is available at: github.com/ehoogeboom/emerging.
Open Datasets | Yes | CIFAR10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015).
Dataset Splits | No | The paper mentions using datasets such as CIFAR10 and ImageNet, but does not explicitly provide the train/validation/test splits (e.g., percentages, sample counts, or a reference to a specific standard split) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models or machine configurations) used to run its experiments.
Software Dependencies | No | The paper mentions software such as TensorFlow and Cython, but does not provide version numbers for these or any other ancillary software components used in the experiments.
Experiment Setup | No | The paper states that it uses the architecture from Kingma & Dhariwal (2018) and varies the number of flows per level (D = 8, D = 4), but it does not provide specific hyperparameters such as learning rates, batch sizes, optimizers, or training schedules in the main text.
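For readers who want a concrete picture of the operation assessed above, the following is a minimal, hypothetical PyTorch sketch of an invertible convolution built from two masked autoregressive convolutions, in the spirit of the paper's emerging d × d convolutions. It is not the authors' implementation (their released code is TensorFlow, with a Cython routine for the inverse), the masking below is a simplification of their exact scheme, and names such as MaskedConv2d and EmergingConv2d are illustrative only.

```python
# Minimal sketch (assumes PyTorch); class/function names are illustrative,
# not taken from github.com/ehoogeboom/emerging.
import torch
import torch.nn as nn
import torch.nn.functional as F


def autoregressive_mask(channels, k, flip=False):
    """Mask a k x k kernel so each output depends only on 'earlier' pixels
    and channels, giving a triangular per-pixel Jacobian."""
    mask = torch.zeros(channels, channels, k, k)
    c = k // 2
    mask[:, :, :c, :] = 1.0                 # rows above the centre row
    mask[:, :, c, :c] = 1.0                 # pixels left of centre in the centre row
    tri = torch.tril(torch.ones(channels, channels))  # channel mixing at the centre tap
    if flip:                                 # reversed pixel/channel ordering
        mask = mask.flip(dims=[2, 3])
        tri = tri.t()
    mask[:, :, c, c] = tri
    return mask


class MaskedConv2d(nn.Module):
    """Autoregressive convolution; the log-determinant comes only from the
    centre weights on matching channels (the Jacobian diagonal)."""

    def __init__(self, channels, k=3, flip=False):
        super().__init__()
        weight = 0.01 * torch.randn(channels, channels, k, k)
        weight[:, :, k // 2, k // 2] += torch.eye(channels)  # start near identity
        self.weight = nn.Parameter(weight)
        self.register_buffer("mask", autoregressive_mask(channels, k, flip))
        self.pad = k // 2

    def forward(self, x):
        w = self.weight * self.mask
        z = F.conv2d(x, w, padding=self.pad)
        diag = torch.diagonal(w[:, :, self.pad, self.pad])
        # log|det| is identical for every sample; the spatial size scales it
        logdet = x.shape[2] * x.shape[3] * diag.abs().log().sum()
        return z, logdet


class EmergingConv2d(nn.Module):
    """Compose two autoregressive convolutions with opposite orderings.
    The paper chooses the two masks so the composition is equivalent to a
    standard d x d convolution (an 'emerging' convolution); this sketch only
    illustrates the structure. The inverse is sequential, which is why the
    paper mentions a Cython implementation for that step."""

    def __init__(self, channels, k=3):
        super().__init__()
        self.conv1 = MaskedConv2d(channels, k, flip=False)
        self.conv2 = MaskedConv2d(channels, k, flip=True)

    def forward(self, x):
        z, ld1 = self.conv1(x)
        z, ld2 = self.conv2(z)
        return z, ld1 + ld2


# Example: shape-preserving transform on a small batch.
x = torch.randn(2, 3, 8, 8)
z, logdet = EmergingConv2d(channels=3)(x)
print(z.shape, float(logdet))
```

The point the sketch illustrates is why the log-determinant stays cheap: each masked convolution has a triangular per-pixel Jacobian, so only the centre weights on matching channels contribute, while the composition of the two convolutions still covers a full square receptive field.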