Provably convergent quasistatic dynamics for mean-field two-player zero-sum games
Authors: Chao Ma, Lexing Ying
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments show the effectiveness of QSLGD on synthetic examples and on training mixtures of GANs. The paper states: "In this section, we apply the quasistatic Langevin gradient descent method to several problems": a 1-dimensional game on a torus, polynomial games on spheres, and GANs. |
| Researcher Affiliation | Academia | Chao Ma, Lexing Ying Department of Mathematics Stanford University Stanford, CA 94305, USA {chaoma,lexing}@stanford.edu |
| Pseudocode | Yes | Algorithm 1: Quasistatic Langevin gradient descent method (QSLGD) |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | No | The paper mentions using 'synthetic examples' and training GANs to learn 'Gaussian mixtures', but it does not provide concrete access information (e.g., links, DOIs, citations to specific public datasets with attribution) for any publicly available or open dataset. |
| Dataset Splits | No | The paper mentions training models and using synthetic examples but does not specify data split percentages or sample counts for training, validation, or testing datasets to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers, required to replicate the experiment. |
| Experiment Setup | Yes | In the experiments, all quasistatic methods take k0 = 1000 and k2 = 1, with different k1 shown in the legends. For each experiment, the authors conduct 300000, 150000, 60000, and 30000 outer iterations for LGDA, QS2, QS5, and QS10, respectively. GANs are trained with 5 generators and 5 discriminators, with k0 = 100, k1 = 5, k2 = 1. The average squared distance can be reduced to 0.3–0.5 after 10000 iterations. The middle figure shows slightly better performance of QSLGD when the step length η (both ηx and ηy) is small. |
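To make the reported hyperparameters (k0, k1, k2, ηx, ηy) concrete, the sketch below shows one plausible reading of quasistatic Langevin gradient descent on a particle discretization: the max-player y is first equilibrated with k0 Langevin steps, then kept near its Gibbs equilibrium with k1 inner steps per outer iteration while the min-player x takes k2 slow steps. The payoff `f` (a toy 1-d game on the torus), the gradients, and the temperature `beta` are assumptions for illustration, not the paper's exact test problems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative payoff on the torus [0, 2*pi); an assumption, not the
# paper's benchmark. x minimizes f, y maximizes it.
def grad_x(x, y):
    # d/dx of mean(sin(x)) * mean(cos(y)), per particle
    return np.cos(x) * np.mean(np.cos(y)) / len(x)

def grad_y(x, y):
    # d/dy of mean(sin(x)) * mean(cos(y)), per particle
    return -np.mean(np.sin(x)) * np.sin(y) / len(y)

def qslgd(n_particles=50, outer_iters=200, k0=1000, k1=5, k2=1,
          eta_x=0.05, eta_y=0.05, beta=10.0):
    """Hedged sketch of quasistatic Langevin gradient descent (QSLGD):
    k0 warm-up Langevin steps for y, then per outer iteration k1 inner
    steps for y followed by k2 descent steps for x."""
    x = rng.uniform(0.0, 2 * np.pi, n_particles)
    y = rng.uniform(0.0, 2 * np.pi, n_particles)

    def langevin_y(y, steps):
        # Noisy ascent on f in y (y is the max-player).
        for _ in range(steps):
            noise = rng.standard_normal(n_particles)
            y = y + eta_y * grad_y(x, y) + np.sqrt(2 * eta_y / beta) * noise
        return np.mod(y, 2 * np.pi)

    y = langevin_y(y, k0)            # initial equilibration of y
    for _ in range(outer_iters):
        y = langevin_y(y, k1)        # keep y near equilibrium
        for _ in range(k2):          # slow Langevin descent for x
            noise = rng.standard_normal(n_particles)
            x = x - eta_x * grad_x(x, y) + np.sqrt(2 * eta_x / beta) * noise
            x = np.mod(x, 2 * np.pi)
    return x, y
```

With k1 = 0 the scheme degenerates to plain Langevin gradient descent-ascent (LGDA), which matches the paper's comparison of LGDA against QS2/QS5/QS10 (larger k1 buys fewer outer iterations).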