Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Some Theoretical Insights into Wasserstein GANs
Authors: Gérard Biau, Maxime Sangnier, Ugo Tanielian
JMLR 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These features are finally illustrated with experiments using both synthetic and real-world datasets. |
| Researcher Affiliation | Collaboration | Gérard Biau EMAIL Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université, 4 place Jussieu, 75005 Paris, France. Maxime Sangnier EMAIL Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université, 4 place Jussieu, 75005 Paris, France. Ugo Tanielian EMAIL Laboratoire de Probabilités, Statistique et Modélisation & Criteo AI Lab, 32 rue Blanche, 75009 Paris, France. |
| Pseudocode | No | The paper describes mathematical formulations and theoretical properties of WGANs but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using a "Python package by Flamary and Courty (2017)", which is a third-party library, but it provides no statement or link releasing source code for the methodology developed by the authors themselves. |
| Open Datasets | Yes | Our goal in this subsection is to illustrate (18) by running a set of experiments on synthetic datasets. The true probability measure is assumed to be a mixture of bivariate Gaussian distributions with either 1, 4, or 9 components. In this subsection, we further illustrate the impact of the generator's and the discriminator's capacities on two high-dimensional datasets, namely MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017). |
| Dataset Splits | Yes | Both datasets have a training set of 60,000 examples. ... In this second experiment, we consider the more realistic situation where we have at hand finite samples X1, . . . , Xn drawn from (n = 5000). |
| Hardware Specification | No | The paper does not mention any specific hardware (such as CPU or GPU models, or cloud computing instances) used to run the experiments. |
| Software Dependencies | No | The paper mentions "using the Python package by Flamary and Courty (2017)" but does not give version numbers for Python or the library itself, which are needed for reproducibility. |
| Experiment Setup | No | The paper describes the generator and discriminator architectures (e.g., "{Gp : p = 2, 3, 5, 7}", "width of the hidden layers is kept constant equal to 20", "rectifier ... and GroupSort activation functions"). However, it does not report hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer configurations, which are needed to reproduce the experimental setup. |