Lipschitz Generative Adversarial Nets

Authors: Zhiming Zhou, Jiadong Liang, Yuxuan Song, Lantao Yu, Hongwei Wang, Weinan Zhang, Yong Yu, Zhihua Zhang

ICML 2019

Each reproducibility variable is listed below with its result and the supporting LLM response.
Research Type: Experimental
LLM Response: "In this section, we empirically study the gradient uninformativeness problem and the performance of various objectives of Lipschitz GANs. The results in terms of Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017) are presented in Table 2."
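Because the quantitative claims in this report rest on IS and FID, a minimal sketch of both metrics may help. It follows the standard formulations from Salimans et al. (2016) and Heusel et al. (2017) rather than the paper's own code; probs, act_real, and act_fake are assumed input arrays.

```python
import numpy as np
from scipy import linalg

def inception_score(probs: np.ndarray) -> float:
    """IS from softmax outputs p(y|x) of shape (num_samples, num_classes):
    exp of the mean KL divergence between p(y|x) and the marginal p(y)."""
    p_y = probs.mean(axis=0, keepdims=True)
    kl = (probs * (np.log(probs + 1e-12) - np.log(p_y + 1e-12))).sum(axis=1)
    return float(np.exp(kl.mean()))

def fid(act_real: np.ndarray, act_fake: np.ndarray) -> float:
    """FID between two sets of Inception activations, each of shape
    (num_samples, feature_dim): the Fréchet distance between Gaussians
    fitted to the two activation sets."""
    mu_r, mu_f = act_real.mean(axis=0), act_fake.mean(axis=0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_f = np.cov(act_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)  # matrix square root
    if np.iscomplexobj(covmean):           # drop tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Both metrics are computed from a pretrained Inception network's outputs over large sample sets, which is why the experiment setup below quotes 500k evaluation samples for stability.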
Researcher Affiliation: Academia
LLM Response: "¹Shanghai Jiao Tong University, ²Peking University, ³Stanford University."
Pseudocode: No
LLM Response: The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code: Yes
LLM Response: "The anonymous code is provided in the supplemental material."
Open Datasets: Yes
LLM Response: "We use the CIFAR-10 training set." (Table 2, objective x: CIFAR-10 IS 7.68 ± 0.03, FID 18.35 ± 0.12; Tiny ImageNet IS 8.66 ± 0.04, FID 16.47 ± 0.04.)
Dataset Splits: No
LLM Response: The paper does not explicitly provide the training/validation/test splits needed to reproduce the experiment.
Hardware Specification: No
LLM Response: The paper does not specify the hardware (exact GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies: No
LLM Response: The paper does not list ancillary software with version numbers (e.g., Python 3.8, CPLEX 12.4) needed to replicate the experiment.
Experiment Setup: Yes
LLM Response: "For all experiments, we adopt the network structures and hyper-parameter setting from (Gulrajani et al., 2017), where WGAN-GP in our implementation achieves IS 7.71 ± 0.03 and FID 18.86 ± 0.13 on CIFAR-10. We use Max GP for all experiments and search the best λ in [0.01, 0.1, 1.0, 10.0]. We use 200,000 iterations for better convergence and use 500k samples to evaluate IS and FID for preferable stability."
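To make the gradient-penalty setup concrete, below is a minimal PyTorch sketch of the penalty term from WGAN-GP (Gulrajani et al., 2017), with an optional max-based variant. Treating "Max GP" as a penalty on the largest per-sample gradient norm in the batch is an assumption about its form, not the paper's exact definition; D, real, fake, and lam are placeholder names.

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0, use_max=False):
    """Gradient penalty on random interpolations between real and fake
    image batches (4-D tensors), as in WGAN-GP. With use_max=True,
    penalize only the largest per-sample gradient norm (an assumed
    'Max GP'-style variant, not confirmed by the paper)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    out = D(interp)
    grads = torch.autograd.grad(
        outputs=out, inputs=interp,
        grad_outputs=torch.ones_like(out),
        create_graph=True, retain_graph=True)[0]
    norms = grads.view(grads.size(0), -1).norm(2, dim=1)
    if use_max:
        return lam * norms.max()                  # assumed Max GP-style term
    return lam * ((norms - 1.0) ** 2).mean()      # standard WGAN-GP term
```

The λ grid quoted above ([0.01, 0.1, 1.0, 10.0]) would correspond to sweeping the lam argument in this sketch.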