A Two-Step Computation of the Exact GAN Wasserstein Distance

Authors: Huidong Liu, Xianfeng Gu, Dimitris Samaras

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Results on synthetic data show that our method computes the Wasserstein distance more accurately. Qualitative and quantitative results on MNIST, LSUN and CIFAR-10 datasets show that the proposed method is more efficient than state-of-the-art WGAN methods, and still produces images of comparable quality."
Researcher Affiliation | Academia | "Department of Computer Science, Stony Brook University, New York, USA. Correspondence to: Huidong Liu <huidliu@cs.stonybrook.edu>, Xianfeng Gu <gu@cs.stonybrook.edu>, Dimitris Samaras <samaras@cs.stonybrook.edu>."
Pseudocode | Yes | "Algorithm 1 WGAN-TS" (a hedged sketch of the exact-distance step appears after this table).
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described method.
Open Datasets | Yes | "Results on the MNIST (LeCun et al., 1998), LSUN (Zhang et al., 2015) and CIFAR-10 (Krizhevsky & Hinton, 2009) datasets show that WGAN-TS is comparable to WGAN-GP and SN-WD."
Dataset Splits | No | The paper mentions training and test data in the context of GANs but does not specify explicit training/validation/test splits (e.g., percentages or counts) for its experiments.
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper mentions the Adam and RMSProp optimizers and the DCGAN architecture, but does not specify software versions for libraries, frameworks, or programming languages (e.g., PyTorch or Python versions).
Experiment Setup | Yes | "The batch size is set to 64 for all methods in all experiments. The dimension of the latent vector is set to 100 for all methods. The number of critic iterations n_c for all methods is set to 5, except for WGAN-TS, where the optimization iteration number n_r is set to 5 instead. We use RMSProp (Tieleman & Hinton, 2012) as the optimizer for the critic and generator in the WGAN and Fisher GAN, and set the learning rate to 5e-5. The weight clipping parameter c in the WGAN is set to 0.01. For the WGAN-GP and WGAN-TS, we use Adam as the optimizer. We set the learning rate to 1e-4, β1 = 0.5 and β2 = 0.999. λ in the WGAN-GP is set to 10. ρ in the Fisher GAN is set to 1e-6 as suggested (Mroueh & Sercu, 2017)." (These settings are collected in the configuration sketch below.)
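The assessment names "Algorithm 1 WGAN-TS" but does not reproduce it. As a rough, hypothetical illustration of the "two-step" idea in the title, assuming the first step computes the exact Wasserstein distance between a real and a generated batch by solving the discrete optimal-transport linear program (the function name, the SciPy solver choice, and the Euclidean ground cost are our assumptions, not the authors' code):

```python
# Hypothetical sketch: exact W1 between two sample batches via the
# discrete optimal-transport linear program. Illustrative only; this
# is not the paper's Algorithm 1.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def exact_wasserstein(x, y):
    """min_P <C, P>  s.t.  P @ 1 = a,  P.T @ 1 = b,  P >= 0,
    with uniform marginals a, b over the two batches."""
    n, m = len(x), len(y)
    C = cdist(x, y, metric="euclidean")   # n x m ground-cost matrix
    a = np.full(n, 1.0 / n)               # uniform weights, real batch
    b = np.full(m, 1.0 / m)               # uniform weights, fake batch

    # Row-sum constraints: sum_j P[i, j] = a[i]
    A_rows = np.zeros((n, n * m))
    for i in range(n):
        A_rows[i, i * m:(i + 1) * m] = 1.0
    # Column-sum constraints: sum_i P[i, j] = b[j]
    A_cols = np.zeros((m, n * m))
    for j in range(m):
        A_cols[j, j::m] = 1.0             # flattened index of P[i, j] is i*m + j

    res = linprog(C.ravel(),
                  A_eq=np.vstack([A_rows, A_cols]),
                  b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun  # exact W1 between the two empirical measures

# Toy usage on small random batches
rng = np.random.default_rng(0)
print(exact_wasserstein(rng.normal(size=(8, 2)),
                        rng.normal(size=(8, 2)) + 1.0))
```

At the reported batch size of 64, the LP has 64 × 64 = 4096 transport variables, which is small enough for an exact solver to handle at every critic update.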
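Since the experiment setup is fully enumerated but no framework or versions are named, the following is a minimal sketch of those settings assuming PyTorch; `critic` and `generator` are stand-in modules, not the DCGAN architectures the paper uses.

```python
# Hypothetical PyTorch rendering of the reported hyperparameters; the
# paper does not name its framework, and the two Linear modules are
# placeholders for the DCGAN critic and generator.
import torch

batch_size = 64      # all methods, all experiments
latent_dim = 100     # dimension of the latent vector z

critic = torch.nn.Linear(784, 1)              # placeholder critic
generator = torch.nn.Linear(latent_dim, 784)  # placeholder generator

# WGAN / Fisher GAN: RMSProp with lr = 5e-5
opt_critic = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
opt_gen = torch.optim.RMSprop(generator.parameters(), lr=5e-5)

# WGAN-GP / WGAN-TS: Adam with lr = 1e-4, beta1 = 0.5, beta2 = 0.999
opt_critic_ts = torch.optim.Adam(critic.parameters(),
                                 lr=1e-4, betas=(0.5, 0.999))
opt_gen_ts = torch.optim.Adam(generator.parameters(),
                              lr=1e-4, betas=(0.5, 0.999))

n_critic = 5    # critic iterations per generator step (n_r = 5 for WGAN-TS)
clip_c = 0.01   # weight-clipping parameter c in the WGAN
lambda_gp = 10  # gradient-penalty weight lambda in WGAN-GP
rho = 1e-6      # Fisher GAN penalty weight rho
```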