Generative Multi-Adversarial Networks

Authors: Ishan Durugkar, Ian Gemp, Sridhar Mahadevan

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Image generation tasks comparing the proposed framework to standard GANs demonstrate that GMAN produces higher-quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric.
Researcher Affiliation | Academia | Ishan Durugkar, Ian Gemp, Sridhar Mahadevan; College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA 01060, USA; {idurugkar,imgemp,mahadeva}@cs.umass.edu
Pseudocode | No | The paper describes the methods using text and mathematical equations, but it does not contain a dedicated 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Code to reproduce experiments and plots is at https://github.com/iDurugkar/GMAN.
Open Datasets | Yes | We evaluate the aforementioned variations of GMAN on a variety of image generation tasks: MNIST (LeCun et al. (1998)), CIFAR-10 (Krizhevsky (2009)), and CelebA (Liu et al. (2015)).
Dataset Splits | No | The paper mentions training on MNIST, CIFAR-10, and CelebA, but it does not explicitly provide details about the training, validation, and test dataset splits or how they were partitioned.
Hardware Specification | Yes | The code was written in TensorFlow (Abadi et al. (2016)) and run on Nvidia GTX 980 GPUs.
Software Dependencies | No | The paper mentions software like TensorFlow and Adam, but it does not provide specific version numbers for these software dependencies or any other libraries.
Experiment Setup | Yes | Specifics for the MNIST architecture and training are: generator latent variables z ~ U(-1, 1)^100; generator convolution transpose layers (4, 4, 128), (8, 8, 64), (16, 16, 32), (32, 32, 1); base discriminator architecture (32, 32, 1), (16, 16, 32), (8, 8, 64), (4, 4, 128). Variants either have convolution 3, (4, 4, 128), removed, or have all filter sizes divided by 2 or 4, i.e., (32, 32, 1), (16, 16, 16), (8, 8, 32), (4, 4, 64) or (32, 32, 1), (16, 16, 8), (8, 8, 16), (4, 4, 32). ReLU activations for all hidden units, tanh activation at the output units of the generator, and sigmoid at the output of the discriminator. Training was performed with Adam (Kingma & Ba (2014)) (lr = 2×10^-4, β1 = 0.5). MNIST was trained for 20 epochs with a minibatch of size 100. CelebA and CIFAR were trained over 24,000 iterations with a minibatch of size 100.
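
For reference, the quoted MNIST setup can be turned into a short sketch. The following is an illustrative reconstruction in tf.keras, not the authors' released code: the feature-map shapes, activations, and Adam hyperparameters come from the quote above, while the kernel sizes, strides, padding, the initial dense projection, and the function names (build_generator, build_discriminator) are assumptions made for illustration.

import tensorflow as tf
from tensorflow.keras import layers

def build_generator(latent_dim=100):
    # z ~ U(-1, 1)^100 projected to (4, 4, 128), then upsampled through
    # (8, 8, 64) and (16, 16, 32) to a (32, 32, 1) tanh output, as quoted above.
    return tf.keras.Sequential([
        layers.Dense(4 * 4 * 128, activation="relu", input_shape=(latent_dim,)),
        layers.Reshape((4, 4, 128)),
        layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(32, kernel_size=5, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator(filter_scale=1):
    # Base discriminator: (32, 32, 1) -> (16, 16, 32) -> (8, 8, 64) -> (4, 4, 128),
    # sigmoid output. The weaker variants divide all filter counts by 2 or 4
    # (filter_scale = 2 or 4).
    return tf.keras.Sequential([
        layers.Conv2D(32 // filter_scale, kernel_size=5, strides=2, padding="same",
                      activation="relu", input_shape=(32, 32, 1)),
        layers.Conv2D(64 // filter_scale, kernel_size=5, strides=2, padding="same", activation="relu"),
        layers.Conv2D(128 // filter_scale, kernel_size=5, strides=2, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

# Optimizer settings quoted from the paper: Adam with lr = 2e-4 and beta_1 = 0.5.
g_opt = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5)

Sampling latents with tf.random.uniform([batch_size, 100], -1.0, 1.0) and running 20 epochs of minibatches of size 100 would match the MNIST schedule reported above; the CelebA and CIFAR runs instead use 24,000 iterations at the same minibatch size.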