Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A mean-field analysis of two-player zero-sum games
Authors: Carles Domingo-Enrich, Samy Jelassi, Arthur Mensch, Grant Rotskoff, Joan Bruna
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate numerically how both dynamics overcome the curse of dimensionality for finding MNE on synthetic games. On real data, we use WFR flows to train mixtures of GANs, that explicitly discover data clusters while maintaining good performance. |
| Researcher Affiliation | Academia | Carles Domingo-Enrich (Courant Institute of Mathematical Sciences, New York University, New York, NY); Samy Jelassi (Princeton University, Princeton, NJ); Arthur Mensch (École Normale Supérieure, Paris, France); Grant Rotskoff (Courant Institute of Mathematical Sciences, New York University, New York, NY); Joan Bruna (Courant Institute of Mathematical Sciences & Center for Data Science, New York University, New York, NY) |
| Pseudocode | Yes | Algorithm 1 Langevin Descent-Ascent (L-DA). ... Algorithm 2 Wasserstein-Fisher-Rao Descent-Ascent (WFR-DA). |
| Open Source Code | Yes | Code has been made available for reproducibility. |
| Open Datasets | Yes | We first set Pdata to be an 8-mode mixture of Gaussians in two dimensions. ... We train a mixture of ResNet generators on CIFAR10 and MNIST. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into train/validation/test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | We replace the position updates in Alg. 2 by extrapolated Adam steps (Gidel et al., 2019) to achieve faster convergence, and perform grid search over generator and discriminator learning rates. ... We use the original W-GAN loss, with weight clipping for the discriminators (f_{y^(j)})_j. |
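The pseudocode row above refers to the paper's Algorithm 1, Langevin Descent-Ascent (L-DA), in which each player's mixed strategy is represented by a cloud of particles updated by noisy gradient descent/ascent on the payoff. The sketch below is a hypothetical minimal illustration, not the authors' implementation: the payoff `f(x, y) = <x, y> + ||x||²/2 - ||y||²/2` and all parameter values (`eta`, `beta`, particle counts) are our own toy choices, picked so the dynamics have a stable equilibrium at the origin.

```python
import numpy as np

def langevin_da(grad_x, grad_y, x, y, eta=0.01, beta=100.0, steps=500, rng=None):
    """Toy sketch of Langevin Descent-Ascent over particle ensembles.

    x: (n, d) particles for the min player; y: (m, d) particles for the max
    player. Each step takes a gradient descent (resp. ascent) step on the
    payoff averaged over the opponent's particles, plus Gaussian noise
    scaled by the inverse temperature beta.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise_scale = np.sqrt(2.0 * eta / beta)
    for _ in range(steps):
        gx = grad_x(x, y)  # (n, d): gradient in x, averaged over y-particles
        gy = grad_y(x, y)  # (m, d): gradient in y, averaged over x-particles
        x = x - eta * gx + noise_scale * rng.standard_normal(x.shape)
        y = y + eta * gy + noise_scale * rng.standard_normal(y.shape)
    return x, y

# Toy strongly convex-concave game f(x, y) = <x, y> + ||x||^2/2 - ||y||^2/2,
# whose (pure) equilibrium is x = y = 0.
def grad_x(x, y):
    return x + y.mean(axis=0)

def grad_y(x, y):
    return x.mean(axis=0) - y

rng = np.random.default_rng(0)
x0 = rng.standard_normal((50, 2))
y0 = rng.standard_normal((50, 2))
x1, y1 = langevin_da(grad_x, grad_y, x0, y0, rng=rng)
```

At low temperature (large `beta`) the particle means contract toward the equilibrium, while the Langevin noise keeps the ensembles spread out, which is what lets the mean-field dynamics explore mixed strategies. Algorithm 2 (WFR-DA) additionally reweights particles via Fisher-Rao (birth-death) mass updates, which this sketch omits.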