Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models

Authors: Jesse Engel, Matthew Hoffman, Adam Roberts

ICLR 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015). Figure 4 demonstrates the quality of conditional samples from a CGAN actor-critic pair and the effect of the distance penalty, which constrains generation to be closer to the prior sample, maintaining similarity between samples with different attributes.
Researcher Affiliation | Industry | Jesse Engel, Google Brain, San Francisco, CA, USA; Matthew D. Hoffman, Google Inc., San Francisco, CA, USA; Adam Roberts, Google Brain, San Francisco, CA, USA
Pseudocode | No | The paper describes the system components and procedures in text and diagrams (Figures 1, 12, 13) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link for synthesized audio samples ('https://goo.gl/ouULt9') but does not include any explicit statement about making the source code for the described methodology publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | For images, we use the MNIST digits dataset (LeCun & Cortes, 2010) and the Large-scale CelebFaces Attributes (CelebA) dataset (Liu et al., 2015).
Dataset Splits | No | The paper mentions a 'validation set' in the caption of Figure 3 and 'test data' in Table 1, but does not provide the specific percentages or counts for the training, validation, and test splits needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using the 'Adam optimizer' and refers to the training procedure of Gulrajani et al. (2017), but it does not specify version numbers for any key software components or libraries such as Python, TensorFlow, PyTorch, or CUDA.
Experiment Setup | Yes | All encoders, decoders, and classifiers are trained with the Adam optimizer (Kingma & Ba, 2015), with learning rate = 3e-4, β1 = 0.9, and β2 = 0.999. To train D_real(z), D_attr(z), and G(z), we follow the training procedure of Gulrajani et al. (2017), applying a gradient penalty of 10, training D and G in a 10:1 step ratio, and using the Adam optimizer with learning rate = 3e-4, β1 = 0.0, and β2 = 0.9.
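The quoted setup maps directly onto optimizer and training-loop configuration. The sketch below is a minimal, hedged illustration of those hyperparameters (the two Adam settings, the gradient penalty of 10, and the 10:1 D:G step ratio) applied to a latent-space critic and actor. The choice of PyTorch, the toy network architectures, the latent dimensionality, and the batch handling are assumptions made for illustration; the paper does not specify them in the text quoted above, and D_attr(z) and the encoder/decoder training are omitted.

```python
import torch

# Toy stand-ins for the actor G(z) and realism critic D_real(z) in latent space;
# the paper's actual architectures (and the attribute critic D_attr) are not reproduced here.
LATENT_DIM = 64  # assumed latent size, for illustration only
G = torch.nn.Sequential(torch.nn.Linear(LATENT_DIM, LATENT_DIM), torch.nn.ReLU(),
                        torch.nn.Linear(LATENT_DIM, LATENT_DIM))
D_real = torch.nn.Sequential(torch.nn.Linear(LATENT_DIM, LATENT_DIM), torch.nn.ReLU(),
                             torch.nn.Linear(LATENT_DIM, 1))

# Optimizer settings quoted above:
#   encoders/decoders/classifiers: Adam, lr=3e-4, betas=(0.9, 0.999)
#   D_real(z), D_attr(z), G(z):    Adam, lr=3e-4, betas=(0.0, 0.9)
opt_G = torch.optim.Adam(G.parameters(), lr=3e-4, betas=(0.0, 0.9))
opt_D = torch.optim.Adam(D_real.parameters(), lr=3e-4, betas=(0.0, 0.9))

GRADIENT_PENALTY = 10.0  # penalty coefficient, following Gulrajani et al. (2017)
D_STEPS_PER_G = 10       # 10:1 D:G step ratio

def gradient_penalty(critic, z_data, z_gen):
    """WGAN-GP penalty on random interpolates between data latents and generated latents."""
    eps = torch.rand(z_data.size(0), 1)
    z_hat = (eps * z_data + (1 - eps) * z_gen).requires_grad_(True)
    grads = torch.autograd.grad(critic(z_hat).sum(), z_hat, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

def train_iteration(z_data_batches, batch_size=32):
    """Ten critic updates followed by one actor update (sketch only).

    z_data_batches: list of latent-code batches, e.g. encodings of real data,
    each of shape (batch, LATENT_DIM); how they are produced is assumed here.
    """
    for z_data in z_data_batches[:D_STEPS_PER_G]:
        z_gen = G(torch.randn(z_data.size(0), LATENT_DIM)).detach()
        d_loss = (D_real(z_gen).mean() - D_real(z_data).mean()
                  + GRADIENT_PENALTY * gradient_penalty(D_real, z_data, z_gen))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    g_loss = -D_real(G(torch.randn(batch_size, LATENT_DIM))).mean()
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

As a smoke test, `train_iteration([torch.randn(32, LATENT_DIM) for _ in range(10)])` runs one 10:1 critic/actor cycle on random stand-in latents.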