Collaborative Sampling in Generative Adversarial Networks

Authors: Yuejiang Liu, Parth Kothari, Alexandre Alahi

AAAI 2020, pp. 4948-4956

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on synthetic and image datasets, we demonstrate that our proposed method can improve generated samples both quantitatively and qualitatively, offering a new degree of freedom in GAN sampling.
Researcher Affiliation | Academia | Yuejiang Liu, Parth Kothari, Alexandre Alahi, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Pseudocode | Yes | Algorithm 1: Collaborative Sampling; Algorithm 2: Discriminator Shaping (sketched below, after the table).
Open Source Code | Yes | Code is available online: https://github.com/vita-epfl/collaborative-gan-sampling
Open Datasets | Yes | We first evaluate our collaborative sampling scheme on a synthetic 2D dataset, which comprises an imbalanced mixture of 8 Gaussians... We use the standard DCGAN (Radford, Metz, and Chintala 2015) for modelling the CIFAR10 (Krizhevsky 2009) and the CelebA (Liu et al. 2015) datasets, and the SAGAN (Zhang et al. 2019) for modelling ImageNet (Deng et al. 2009) at 128 x 128 resolution... Here, we use the original NS-GAN (Goodfellow et al. 2014) as a baseline and apply our collaborative sampling scheme for 20 refinement steps with a step size of 0.1.
Dataset Splits | Yes | We first evaluate our collaborative sampling scheme on a synthetic 2D dataset... We use the standard DCGAN (Radford, Metz, and Chintala 2015) for modelling the CIFAR10 (Krizhevsky 2009) and the CelebA (Liu et al. 2015) datasets, and the SAGAN (Zhang et al. 2019) for modelling ImageNet (Deng et al. 2009) at 128 x 128 resolution. For sample refinement, we conduct a maximum of 50 refinement steps with a step size of 0.1 in a middle layer of the generator for the DCGAN and 16 updates with a step size of 0.5 for the SAGAN. Performance is quantitatively evaluated using the Inception Score (IS) (Salimans et al. 2016) and the Fréchet Inception Distance (FID) (Heusel et al. 2017) on 50k images. These datasets (CIFAR10, CelebA, ImageNet, MNIST) are standard benchmarks with well-defined splits commonly used for training and evaluation.
Hardware Specification | No | The paper does not describe the hardware used to run the experiments (e.g., GPU or CPU models, memory).
Software Dependencies | No | The paper mentions specific models (e.g., DCGAN, SAGAN, CycleGAN) but does not provide version numbers for the underlying software libraries, frameworks (such as PyTorch or TensorFlow), or programming languages.
Experiment Setup | Yes | We shape the discriminator for 5k additional iterations after terminating the standard GAN training and conduct a maximum of 50 sample refinement steps in the data space with a step size of 0.1... we conduct a maximum of 50 refinement steps with a step size of 0.1 in a middle layer of the generator for the DCGAN and 16 updates with a step size of 0.5 for the SAGAN... we apply our collaborative sampling scheme for 20 refinement steps with a step size of 0.1.
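
The refinement procedure quoted above (Algorithm 1, Collaborative Sampling) amounts to gradient-based sample updates guided by the discriminator, with a fixed step size and a fixed step budget. The PyTorch-style sketch below illustrates that idea only; it is not the authors' implementation. The function name collaborative_sampling, the assumption that discriminator returns a per-sample realism logit, and the defaults mirroring the quoted 50 steps and 0.1 step size are illustrative assumptions.

```python
import torch

def collaborative_sampling(discriminator, samples, steps=50, step_size=0.1):
    """Illustrative sketch of gradient-based sample refinement.

    Generated samples are nudged in the direction that raises the
    discriminator's realism score, for a fixed number of steps with a
    fixed step size (defaults mirror the data-space setting quoted
    above). `discriminator` is assumed to map a batch of samples to a
    per-sample realism logit.
    """
    x = samples.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = discriminator(x).sum()           # higher logit = more realistic
        grad = torch.autograd.grad(score, x)[0]  # gradient of the score w.r.t. the samples
        x = (x + step_size * grad).detach().requires_grad_(True)
    return x.detach()
```

The quoted setup also applies the same update to a middle layer of the generator rather than the data space (e.g., for the DCGAN experiments); this sketch shows only the data-space variant.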
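
The "5k additional iterations" of discriminator shaping (Algorithm 2) can be sketched in the same spirit. The snippet below is an assumption-laden illustration, not the authors' code: it reuses the collaborative_sampling sketch above, assumes real_loader yields batches of real images directly, assumes the generator exposes a latent_dim attribute, and uses a standard binary cross-entropy GAN loss on real versus refined samples.

```python
import torch
import torch.nn.functional as F
from itertools import cycle, islice

def shape_discriminator(discriminator, generator, real_loader, optimizer, iters=5000):
    """Illustrative sketch of post-training discriminator shaping.

    After regular GAN training stops, the discriminator keeps being updated
    against *refined* fake samples, so its gradients stay informative when
    used to guide sample refinement. `latent_dim`, the loader format, and
    the loss form are assumptions made for this sketch.
    """
    for real in islice(cycle(real_loader), iters):     # batch of real images
        z = torch.randn(real.size(0), generator.latent_dim)
        with torch.no_grad():
            fake = generator(z)
        fake = collaborative_sampling(discriminator, fake)  # from the sketch above

        d_real = discriminator(real)
        d_fake = discriminator(fake)
        loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
                + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```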