A mean-field analysis of two-player zero-sum games

Authors: Carles Domingo-Enrich, Samy Jelassi, Arthur Mensch, Grant Rotskoff, Joan Bruna

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate numerically how both dynamics overcome the curse of dimensionality for finding MNE on synthetic games. On real data, we use WFR flows to train mixtures of GANs, that explicitly discover data clusters while maintaining good performance."
Researcher Affiliation | Academia | Carles Domingo-Enrich, Courant Institute of Mathematical Sciences, New York University, New York, NY (cd2754@nyu.edu); Samy Jelassi, Princeton University, Princeton, NJ (sjelassi@princeton.edu); Arthur Mensch, École Normale Supérieure, Paris, France (arthur.mensch@m4x.org); Grant Rotskoff, Courant Institute of Mathematical Sciences, New York University, New York, NY (rotskoff@cims.nyu.edu); Joan Bruna, Courant Institute of Mathematical Sciences & Center for Data Science, New York University, New York, NY (bruna@cims.nyu.edu)
Pseudocode | Yes | Algorithm 1: Langevin Descent-Ascent (L-DA). ... Algorithm 2: Wasserstein-Fisher-Rao Descent-Ascent (WFR-DA).
Open Source Code | Yes | Code has been made available for reproducibility.
Open Datasets | Yes | "We first set Pdata to be an 8-mode mixture of Gaussians in two dimensions. ... We train a mixture of ResNet generators on CIFAR10 and MNIST."
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) needed to reproduce the train/validation/test partitioning.
Hardware Specification | No | The paper does not provide hardware details (GPU/CPU models, processor speeds, memory amounts) for the machines used to run its experiments.
Software Dependencies | No | The paper does not provide the ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | "We replace the position updates in Alg. 2 by extrapolated Adam steps (Gidel et al., 2019) to achieve faster convergence, and perform a grid search over generator and discriminator learning rates. ... We use the original W-GAN loss, with weight clipping for the discriminators (f_{y^{(j)}})_j."
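The paper's Algorithm 1 (Langevin Descent-Ascent) is not reproduced above, but the idea it names can be sketched from its description: each player's mixed strategy is an empirical measure over a cloud of particles, and particles take noisy gradient descent (min-player) / ascent (max-player) steps on the expected payoff against the opponent's cloud. The payoff below, f(x, y) = xᵀAy + ||x||²/2 − ||y||²/2, is an illustrative convex-concave choice for this sketch, not one of the paper's benchmarks.

```python
import numpy as np

def langevin_descent_ascent(A, n=64, steps=2000, eta=0.05, beta=100.0, seed=0):
    """Minimal particle sketch of Langevin Descent-Ascent (L-DA), reconstructed
    from the paper's description for the illustrative payoff
    f(x, y) = x^T A y + ||x||^2 / 2 - ||y||^2 / 2.
    X holds the min-player's n particles, Y the max-player's; each step adds
    Gaussian noise of scale sqrt(2 * eta / beta) (temperature 1 / beta)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, A.shape[0]))
    Y = rng.standard_normal((n, A.shape[1]))
    sigma = np.sqrt(2.0 * eta / beta)  # Langevin noise scale
    for _ in range(steps):
        # Gradient of the mean payoff: the bilinear term couples each particle
        # to the *average* of the opponent's cloud (the mean-field interaction).
        grad_X = X + A @ Y.mean(axis=0)        # descent direction for x-particles
        grad_Y = A.T @ X.mean(axis=0) - Y      # ascent direction for y-particles
        X = X - eta * grad_X + sigma * rng.standard_normal(X.shape)
        Y = Y + eta * grad_Y + sigma * rng.standard_normal(Y.shape)
    return X, Y
```

For this strongly convex-concave payoff the particle means contract toward the origin, the unique equilibrium, while the noise keeps each cloud spread at scale roughly 1/sqrt(beta); the paper's point is that such particle dynamics approximate mixed Nash equilibria in the mean-field (large-n) limit.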
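The experiment-setup row mentions replacing position updates with extrapolated Adam steps (Gidel et al., 2019). The underlying extrapolation (extra-gradient) scheme can be sketched with plain gradient steps instead of Adam; `grad_x` and `grad_y` are hypothetical callables returning the payoff gradients for the two players.

```python
def extragradient_step(x, y, grad_x, grad_y, eta=0.1):
    """One extrapolation (extra-gradient) step of the kind Gidel et al. (2019)
    combine with Adam, shown with plain gradient steps for clarity.
    x descends on the payoff, y ascends."""
    # 1) Extrapolate: take a provisional step to a lookahead point.
    x_la = x - eta * grad_x(x, y)
    y_la = y + eta * grad_y(x, y)
    # 2) Update the *original* iterates using gradients at the lookahead point.
    x_new = x - eta * grad_x(x_la, y_la)
    y_new = y + eta * grad_y(x_la, y_la)
    return x_new, y_new
```

The lookahead gradient is what stabilizes descent-ascent: on a bilinear game f(x, y) = xy, simultaneous gradient steps spiral outward, while the extrapolated update contracts toward the equilibrium at the origin.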