Multi-View Data Generation Without View Supervision

Authors: Mickael Chen, Ludovic Denoyer, Thierry Artières

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experiment it on four image datasets on which we demonstrate the effectiveness of the model and its ability to generalize.
Researcher Affiliation | Collaboration | Mickaël Chen, Sorbonne Université, CNRS, Laboratoire d'Informatique de Paris 6, LIP6, F-75005, Paris, France, mickael.chen@lip6.fr; Ludovic Denoyer, Sorbonne Université, CNRS, Laboratoire d'Informatique de Paris 6, LIP6, F-75005, Paris, France, and Criteo Research, ludovic.denoyer@lip6.fr; Thierry Artières, Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France, and École Centrale Marseille, thierry.artiere@centrale-marseille.fr
Pseudocode | No | The paper describes the model architecture and objective functions but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Following the article, an implementation of our algorithms is freely available at https://github.com/mickaelchen/GMV
Open Datasets | Yes | Table 1 statistics: CelebA (Liu et al., 2015): 198,791 train / 3,808 test images, 9,999 train / 178 test identities, 1–35 views per identity (mean 19.9); 3DChairs (Aubry et al., 2014): 80,600 train / 5,766 test images, 1,300 train / 93 test objects, 62 views per object; MVC cloth (Liu et al., 2016): 159,128 train / 2,132 test images, 37,004 train / 495 test objects, 4–7 views per object (mean 4.3); 102flowers (Nilsback & Zisserman, 2008): 8,189 images over 102 categories, 40–258 views per category (mean 80.3).
Dataset Splits | No | Table 1 lists 'train' and 'test' data splits for the datasets, but no explicit 'validation' split is mentioned with specific percentages or counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and a DCGAN implementation, but it does not provide specific version numbers for any software dependencies or libraries (e.g., Python, PyTorch, TensorFlow, specific GAN frameworks).
Experiment Setup | Yes | The images were rescaled to 3×64×64 tensors. The generator G and the discriminator D follow the DCGAN implementation proposed in Radford et al. (2015). Learning was performed using classical GAN training techniques: we used the Adam optimizer (Kingma & Ba (2014)) with batches of size 128. Following standard practice, learning rates in the GMV experiments are set to 1×10⁻³ for G and 2×10⁻⁴ for D. For the C-GMV experiments, learning rates are set to 5×10⁻⁵. The adversarial objectives are optimized by alternating gradient descent over the generator/encoder, and over the discriminator.
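
As a rough illustration of the setup quoted above, a minimal PyTorch sketch of the alternating adversarial updates with the reported hyperparameters (Adam, batch size 128, learning rate 1e-3 for G and 2e-4 for D) might look as follows. The Generator and Discriminator here are simplified DCGAN-style stand-ins, and the random real_batch is a placeholder for the rescaled 3×64×64 images; this is not the authors' GMV/C-GMV code, which adds multi-view objectives on top of this scheme.

import torch
import torch.nn as nn

# Simplified DCGAN-style stand-ins (Radford et al., 2015), not the GMV architectures.
class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # 3x64x64 output
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),
            nn.Conv2d(512, 1, 4, 1, 0),  # single real/fake logit
        )
    def forward(self, x):
        return self.net(x).view(-1)

G, D = Generator(), Discriminator()
# Learning rates as reported for the GMV experiments: 1e-3 for G, 2e-4 for D.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
batch_size, z_dim = 128, 100

real_batch = torch.randn(batch_size, 3, 64, 64)  # placeholder for rescaled 3x64x64 images

# One alternating step: update D on real and fake samples, then update G.
z = torch.randn(batch_size, z_dim)
fake = G(z)
d_loss = bce(D(real_batch), torch.ones(batch_size)) + \
         bce(D(fake.detach()), torch.zeros(batch_size))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake), torch.ones(batch_size))  # G tries to make D output "real"
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Alternating the discriminator and generator/encoder updates in this way is the standard adversarial training scheme the quoted setup refers to.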