Dual Discriminator Generative Adversarial Nets

Authors: Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung

NeurIPS 2017

Each reproducibility variable is listed below with its assessed result and the supporting LLM response.
Research Type: Experimental
"We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare our D2GAN with the latest state-of-the-art GAN variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good-quality and diverse samples over baselines, and the capability of our method to scale up to the ImageNet database."
Researcher Affiliation: Academia
"Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung. Centre for Pattern Recognition and Data Analytics, Deakin University, Geelong, Australia. {tu.nguyen, trung.l, hungv, dinh.phung}@deakin.edu.au"
Pseudocode: Yes
"We refer to the supplementary material for the pseudo-code of learning parameters for D2GAN."
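The pseudo-code itself lives in the paper's supplement; as a rough illustration of what it describes, below is a minimal PyTorch sketch of one D2GAN training step, assuming the paper's three-player objective (D1 rewards real data, D2 rewards generated data, and both discriminators produce positive outputs via softplus). The function name, the single optimizer shared by both discriminators, and the alpha/beta defaults are our assumptions, not the authors' code.

```python
import torch

def d2gan_step(G, D1, D2, opt_g, opt_d, x_real, z, alpha=1.0, beta=1.0):
    """One D2GAN update: ascend J in (D1, D2), then descend it in G.

    J = alpha*E[log D1(x)] - E[D1(G(z))] - E[D2(x)] + beta*E[log D2(G(z))]
    opt_d is assumed to hold the parameters of both D1 and D2.
    """
    # --- discriminator update: gradient ascent on J ---
    x_fake = G(z).detach()  # detach so the generator gets no gradient here
    j = (alpha * torch.log(D1(x_real)) - D1(x_fake)
         - D2(x_real) + beta * torch.log(D2(x_fake))).mean()
    opt_d.zero_grad()
    (-j).backward()          # ascend J by descending -J
    opt_d.step()

    # --- generator update: gradient descent on G's terms of J ---
    x_fake = G(z)            # re-generate with gradients enabled
    g_loss = (-D1(x_fake) + beta * torch.log(D2(x_fake))).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```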
Open Source Code: Yes
"Our implementation is in TensorFlow [1] and we have published a version for reference: https://github.com/tund/D2GAN"
Open Datasets: Yes
"We conduct extensive experiments on one synthetic dataset and four real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet) of very different nature."
Dataset Splits: No
The paper mentions 60,000 training images and 10,000 testing images for MNIST, but does not explicitly state a validation split or give split details for the other datasets.
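For reference, those MNIST figures match the dataset's canonical train/test partition. A hypothetical torchvision loader reproducing that split might look like the following; this is purely illustrative, since the authors' code is in TensorFlow and the paper describes no validation set.

```python
from torchvision import datasets, transforms

# Canonical MNIST partition: 60,000 training / 10,000 test images.
# No validation split is carved out, mirroring the paper's description.
tfm = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=tfm)
test_set = datasets.MNIST("data", train=False, download=True, transform=tfm)
print(len(train_set), len(test_set))  # 60000 10000
```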
Hardware Specification: No
The paper does not specify the hardware used to run the experiments (e.g., CPU/GPU models, memory); it only mentions the use of TensorFlow.
Software Dependencies: No
"Our implementation is in TensorFlow [1]." The reference cites "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015" but gives no specific version number.
Experiment Setup: Yes
"Common points are: (i) discriminator outputs with softplus activations, f(x) = ln(1 + e^x), i.e., a positive version of ReLU; (ii) Adam optimizer [16] with learning rate 0.0002 and first-order momentum 0.5; (iii) minibatch size of 64 samples for training both the generator and the discriminators; (iv) LeakyReLU with the slope of 0.2; and (v) weights initialized from an isotropic Gaussian: N(0, 0.01)."
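To make these settings concrete, here is a minimal PyTorch sketch that wires the reported hyper-parameters into a toy fully connected discriminator head. The tiny architecture is our illustrative stand-in, not the paper's network, and the original implementation is in TensorFlow.

```python
import torch
import torch.nn as nn

LR, BETA1, BATCH = 2e-4, 0.5, 64  # (ii) Adam lr 0.0002, momentum 0.5; (iii) minibatch 64

def init_weights(m):
    # (v) weights drawn from an isotropic Gaussian N(0, 0.01)
    if isinstance(m, nn.Linear):
        nn.init.normal_(m.weight, mean=0.0, std=0.01)
        nn.init.zeros_(m.bias)

# Toy discriminator head: (iv) LeakyReLU with slope 0.2,
# (i) positive softplus output f(x) = ln(1 + e^x).
disc = nn.Sequential(
    nn.Linear(784, 128),
    nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
    nn.Softplus(),
)
disc.apply(init_weights)

opt = torch.optim.Adam(disc.parameters(), lr=LR, betas=(BETA1, 0.999))
```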