Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
Authors: Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Georgios Piliouras
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, these results are robust to discrete and stochastic updates using sampling as shown in Figure 4. |
| Researcher Affiliation | Academia | Lampros Flokas, Department of Computer Science, Columbia University, New York, NY 10025, lamflokas@cs.columbia.edu; Emmanouil V. Vlatakis-Gkaragkounis, Department of Computer Science, Columbia University, New York, NY 10025, emvlatakis@cs.columbia.edu; Georgios Piliouras, Singapore University of Technology & Design, georgios.piliouras@sutd.edu.sg |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | 3.a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] |
| Open Datasets | No | The paper mentions experiments on a 'fully mixed distribution' and on 'Gaussian distributions', implying synthetic or internally generated data; it provides no access information (e.g., URL, DOI, or a specific citation with author/year) for any publicly available dataset. |
| Dataset Splits | No | The paper does not specify exact percentages or sample counts for training, validation, or test dataset splits. |
| Hardware Specification | No | 3.d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No] The presented experiments are used for illustrative purposes and only to validate the theoretical findings which are the core results of this work. |
| Software Dependencies | No | The paper mentions 'Stochastic GDA' but does not specify any software dependencies (e.g., libraries, frameworks) with version numbers. |
| Experiment Setup | No | Although the checklist claims that training details were specified, the main text reports no concrete hyperparameter values (e.g., learning rate, batch size, epochs) and no system-level training configuration. A figure caption mentions 'small learning rates' but gives no concrete values. |
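Since the paper provides no pseudocode, the following is a minimal illustrative sketch of the simultaneous gradient descent ascent (GDA) dynamics the paper analyzes, applied to a toy convex-concave objective f(x, y) = x² − y². This is an assumption-laden example for orientation only, not the authors' implementation; the paper's setting involves objectives with hidden convex-concave structure, and the `gda` helper and toy objective here are hypothetical.

```python
# Simultaneous gradient descent ascent (GDA) on a toy saddle problem:
# minimize over x, maximize over y the objective f(x, y) = x^2 - y^2,
# whose unique saddle point is (0, 0). Illustrative sketch only.

def gda(grad_x, grad_y, x, y, lr=0.05, steps=500):
    """Run simultaneous GDA: descend in x, ascend in y."""
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        # Both players update at the same time with the same step size.
        x, y = x - lr * gx, y + lr * gy
    return x, y

# Partial derivatives of f(x, y) = x^2 - y^2.
grad_x = lambda x, y: 2 * x    # df/dx
grad_y = lambda x, y: -2 * y   # df/dy

x_star, y_star = gda(grad_x, grad_y, x=1.0, y=-1.0)
print(x_star, y_star)  # both iterates contract toward the saddle at (0, 0)
```

On this strongly convex-concave toy objective both coordinates contract by a factor (1 − 2·lr) per step, so GDA converges; the paper's contribution concerns convergence guarantees in far less benign, hidden-structure settings.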