Wasserstein Generative Adversarial Networks
Authors: Martin Arjovsky, Soumith Chintala, Léon Bottou
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4, we empirically show that WGANs cure the main training problems of GANs. |
| Researcher Affiliation | Collaboration | ¹Courant Institute of Mathematical Sciences, NY; ²Facebook AI Research, NY. |
| Pseudocode | Yes | Algorithm 1 WGAN, our proposed algorithm. |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | We run experiments on image generation. The target distribution to learn is the LSUN-Bedrooms dataset (Yu et al., 2015) a collection of natural images of indoor bedrooms. |
| Dataset Splits | No | The paper mentions using a critic for evaluation and plotting learning curves, but does not specify validation splits (e.g., percentages, sample counts, or how a validation set was constructed). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers like RMSProp and Adam, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the implementation. |
| Experiment Setup | Yes | All experiments in the paper used the default values α = 0.00005, c = 0.01, m = 64, n_critic = 5. ... We use the hyper-parameters specified in Algorithm 1 for all of our experiments. |
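The quoted hyper-parameters are exactly the inputs to Algorithm 1 (WGAN): α is the RMSProp learning rate, c the weight-clipping bound, m the batch size, and n_critic the number of critic updates per generator update. As a minimal sketch of that loop, the following toy 1-D example uses the paper's default values; the linear critic and generator, the N(3, 1) target data, and the step count are assumptions made for this illustration only, not part of the paper's experiments.

```python
import numpy as np

# Sketch of Algorithm 1 (WGAN) with the paper's default hyper-parameters.
rng = np.random.default_rng(0)
alpha, c, m, n_critic = 5e-5, 0.01, 64, 5

# Toy problem (an assumption for this sketch): real data ~ N(3, 1),
# generator g(z) = theta*z + b with z ~ N(0, 1), linear critic f(x) = w*x.
theta, b = 0.1, 0.0
w = 0.005
s_w = s_theta = s_b = 0.0  # RMSProp running averages of squared gradients

def rmsprop_ascent(param, grad, s, lr=alpha, rho=0.9, eps=1e-8):
    """One RMSProp step in the ascent direction; the sign of each
    player's gradient encodes its own objective."""
    s = rho * s + (1 - rho) * grad ** 2
    return param + lr * grad / (np.sqrt(s) + eps), s

for _ in range(70000):
    # n_critic critic updates per generator update.
    for _ in range(n_critic):
        x = rng.normal(3.0, 1.0, m)                # real minibatch
        gz = theta * rng.normal(0.0, 1.0, m) + b   # fake minibatch
        # Critic ascends mean f(x) - mean f(g(z)); for f(x) = w*x the
        # gradient w.r.t. w is mean(x) - mean(g(z)).
        w, s_w = rmsprop_ascent(w, x.mean() - gz.mean(), s_w)
        w = np.clip(w, -c, c)  # weight clipping enforces the Lipschitz bound
    # Generator ascends mean f(g(z)), i.e. minimises -mean f(g(z)).
    z = rng.normal(0.0, 1.0, m)
    theta, s_theta = rmsprop_ascent(theta, w * z.mean(), s_theta)
    b, s_b = rmsprop_ascent(b, w, s_b)

print(f"mean of generated samples after training: {b:.2f}")
```

In this toy setup the mean of the generated samples (equal to b, since E[z] = 0) drifts toward the real mean of 3 while the critic weight stays pinned inside [-c, c]; the structure (inner critic loop, clipping after each critic step, alternating RMSProp updates) mirrors Algorithm 1.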