Learning to Generate Samples from Noise through Infusion Training

Authors: Florian Bordes, Sina Honari, Pascal Vincent

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show competitive results compared to the samples generated with a basic Generative Adversarial Net.
Researcher Affiliation | Academia | Florian Bordes, Sina Honari, Pascal Vincent; Montreal Institute for Learning Algorithms (MILA), Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Montréal, Québec, Canada ({firstname.lastname}@umontreal.ca)
Pseudocode | No | The paper describes the procedures in text and through figures illustrating the chains, but it does not include any formal 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper makes no explicit statement about releasing its source code and provides no link to a code repository for the implemented methodology.
Open Datasets | Yes | We trained such a model using our infusion training procedure on MNIST (LeCun & Cortes, 1998), Toronto Face Database (Susskind et al., 2010), CIFAR-10 (Krizhevsky & Hinton, 2009), and CelebA (Liu et al., 2015).
Dataset Splits | Yes | The data set D is supposed split into training, validation and test subsets D_train, D_valid, D_test.
Hardware Specification | No | The paper thanks 'Compute Canada and Nvidia for their computation resources' in the acknowledgments, but it does not specify GPU or CPU models, memory, or any other hardware configuration used to run the experiments.
Software Dependencies | No | The paper acknowledges Theano (Theano Development Team, 2016) but does not give version numbers for Theano or for any other software libraries needed to reproduce the experiments.
Experiment Setup | Yes | The network trained on MNIST and TFD is an MLP composed of two fully connected layers with 1200 units using batch-normalization (Ioffe & Szegedy, 2015). The network trained on CIFAR-10 is based on the same generator as the GANs of Salimans et al. (2016), i.e. one fully connected layer followed by three transposed convolutions. ... For each experiment, we trained the network on 15 steps of denoising with an increasing infusion rate of 1% (ω = 0.01, α^(0) = 0), except on CIFAR-10, where we use an increasing infusion rate of 2% (ω = 0.02, α^(0) = 0) on 20 steps.
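
The experiment setup above hinges on the paper's increasing infusion-rate schedule, α^(t) = α^(0) + ω·t. The following minimal NumPy sketch (written for this report; the authors released no code) illustrates how one infused chain could be built toward a training target x under that schedule. The function names and the stubbed Gaussian transition operator are our assumptions, not the paper's implementation; only the Bernoulli-mask infusion and the rate schedule follow the paper's description.

    import numpy as np

    def model_transition(z, rng):
        # Stub for the learned transition operator p(z^(t) | z^(t-1));
        # a small Gaussian perturbation keeps the sketch runnable (assumption).
        return z + 0.1 * rng.standard_normal(z.shape)

    def infusion_chain(x, num_steps=15, omega=0.01, alpha0=0.0, seed=0):
        # Build one infused chain toward target x (sketch, not the authors' code).
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(x.shape)          # z^(0) sampled from the prior
        states = [z]
        for t in range(1, num_steps + 1):
            alpha_t = alpha0 + omega * t          # alpha^(t) = alpha^(0) + omega * t
            z_model = model_transition(z, rng)    # candidate from the model's operator
            mask = rng.random(x.shape) < alpha_t  # infuse each target dimension w.p. alpha_t
            z = np.where(mask, x, z_model)
            states.append(z)
        return states

    # Example: a 15-step chain toward one flattened 28x28 target (MNIST setting).
    target = np.zeros(784)
    chain = infusion_chain(target, num_steps=15, omega=0.01)

Training would then maximize the log-likelihood of the target under the model's output distribution at every step of such chains; the sketch covers only chain construction.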