Optimizing the Latent Space of Generative Networks

Authors: Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Throughout a variety of experiments, we show that GLO enjoys many of the desirable properties of GANs: synthesizing visually-appealing samples, interpolating meaningfully between samples, and performing linear arithmetic with noise vectors; all of this without the adversarial optimization scheme."
Researcher Affiliation | Industry | "Facebook AI Research. Correspondence to: Piotr Bojanowski <bojanowski@fb.com>."
Pseudocode | No | The paper describes the optimization process verbally and with mathematical equations but does not include structured pseudocode or an algorithm block. (A reconstruction of the stated objective appears after this table.)
Open Source Code | No | The paper contains no explicit statement or link providing access to the source code for the described method.
Open Datasets | Yes | "We carry out our experiments on MNIST (http://yann.lecun.com/exdb/mnist/), SVHN (http://ufldl.stanford.edu/housenumbers/) as well as more challenging datasets such as CelebA (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) and LSUN bedroom (http://lsun.cs.princeton.edu/2017/)."
Dataset Splits | No | The paper specifies training on the complement of a 1/32 test set, but does not explicitly mention a separate validation split or how hyperparameters were tuned.
Hardware Specification | No | The paper does not specify any hardware details such as CPU or GPU models, or the memory used for running the experiments.
Software Dependencies | No | The paper mentions using a DCGAN generator architecture and Stochastic Gradient Descent (SGD) for optimization, but does not provide version numbers for software dependencies or libraries.
Experiment Setup | Yes | "We use Stochastic Gradient Descent (SGD) to optimize both θ and z, setting the learning rate for θ at 1 and the learning rate of z at 10. After each update, the noise vectors z are projected to the unit ℓ2 sphere. In the sequel, we initialize the random vectors of GLO using a Gaussian distribution (for the CelebA dataset) or the top d principal components (for the LSUN dataset). We use the ℓ2 + Lap1 loss for all the experiments but MNIST where we use an MSE loss. We use 32 dimensions for MNIST, 64 dimensions for SVHN and 256 dimensions for CelebA and LSUN."
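
For context, the procedure quoted in the last row is a joint minimization over the generator parameters θ and one free latent vector per training image. A reconstruction of that objective from the description above (in LaTeX; g_θ is the generator, x_i the training images, and ℓ the reconstruction loss), with the norm constraint enforced by projecting each z_i back to the unit ℓ2 sphere after every update:

\min_{\theta} \; \frac{1}{N} \sum_{i=1}^{N} \min_{z_i} \; \ell\big( g_{\theta}(z_i), \, x_i \big), \qquad \|z_i\|_2 = 1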
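
To make the quoted setup concrete, here is a minimal PyTorch sketch of the joint optimization it describes. The MLP generator, MNIST-like shapes, synthetic data, and plain MSE loss are illustrative stand-ins (the paper uses a DCGAN generator and an ℓ2 + Lap1 loss); this is a reading of the quoted procedure, not the authors' code.

# Minimal GLO training-loop sketch, reconstructed from the quoted setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 32            # latent dimensionality (the quote uses 32 for MNIST)
n_images = 4096   # small synthetic set so the sketch is self-contained
batch_size = 128
epochs = 5

# Hypothetical stand-in generator (the paper uses a DCGAN generator).
generator = nn.Sequential(
    nn.Linear(d, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)

# One learnable latent vector per training image: Gaussian initialization,
# then projection to the unit l2 sphere (the quote describes Gaussian init
# for CelebA; LSUN instead uses the top d principal components).
Z = torch.randn(n_images, d)
Z = Z / Z.norm(dim=1, keepdim=True)
Z.requires_grad_(True)

# Joint SGD over theta (lr 1) and z (lr 10), as in the quoted setup.
opt = torch.optim.SGD(
    [{"params": generator.parameters()},   # theta, uses the default lr
     {"params": [Z], "lr": 10.0}],         # per-image latent vectors
    lr=1.0,
)

# Synthetic images in [-1, 1] so the sketch runs as-is; in practice these
# would be the real training images.
images = torch.rand(n_images, 28 * 28) * 2 - 1

for epoch in range(epochs):
    perm = torch.randperm(n_images)
    for start in range(0, n_images, batch_size):
        idx = perm[start:start + batch_size]
        loss = F.mse_loss(generator(Z[idx]), images[idx])  # stand-in loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        # After each update, project the latents back to the unit l2 sphere.
        with torch.no_grad():
            Z.div_(Z.norm(dim=1, keepdim=True).clamp_min(1e-12))

One plausible reading of the design: projecting after every step keeps the latent table on a compact set, which is what makes interpolation and vector arithmetic between learned z vectors meaningful, and the 10x learning rate on z compensates for each z_i receiving only one gradient update per epoch while θ is updated at every minibatch.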