Approximating Nash Equilibria in Normal-Form Games via Stochastic Optimization
Authors: Ian Gemp, Luke Marris, Georgios Piliouras
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We complement our theoretical analysis with experiments demonstrating that stochastic gradient descent can outperform previous state-of-the-art approaches. (Abstract) |
| Researcher Affiliation | Industry | Ian Gemp, DeepMind, London, UK, imgemp@google.com; Luke Marris, DeepMind, London, UK, marris@google.com; Georgios Piliouras, DeepMind, London, UK, gpil@google.com |
| Pseudocode | No | The paper describes algorithms such as Stochastic Gradient Descent and X-armed bandits, but it does not include any formal pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper mentions using existing open-source frameworks like 'Open Spiel (Lanctot et al., 2019)' and 'GAMUT (Nudelman et al., 2004)' for their experiments, but it does not provide a link or explicit statement about releasing the source code for their own proposed methodology. |
| Open Datasets | Yes | The games examined in Figure 3 were all taken from (Gemp et al., 2022). Each is available via open source implementations in Open Spiel (Lanctot et al., 2019) or GAMUT (Nudelman et al., 2004). (Appendix C.4) |
| Dataset Splits | No | The paper does not provide explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or specific predefined splits) for its experiments on games. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory, or cloud computing resources) used for running the experiments. |
| Software Dependencies | No | The paper mentions software tools like the 'gambit library (McKelvey et al., 2016)', 'Open Spiel (Lanctot et al., 2019)', and 'GAMUT (Nudelman et al., 2004)'. However, it does not provide specific version numbers for these or for other ancillary software components, such as programming languages or deep learning frameworks. |
| Experiment Setup | Yes | For each of the experiments, we sweep over learning rates in log-space from 10^-3 to 10^2 in increments of 1. We also consider whether to run SGD with the projected-gradient and whether to constrain iterates to the simplex via Euclidean projection or entropic mirror descent (Beck and Teboulle, 2003). (Appendix C.4) |
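To make the reported setup concrete, below is a minimal sketch of the two simplex-constraint options named in Appendix C.4 (Euclidean projection vs. entropic mirror descent) together with the log-space learning-rate sweep. This is an illustrative reconstruction, not the authors' released code; the function names and the use of NumPy are assumptions.

```python
import numpy as np

# Learning-rate sweep described in Appendix C.4: log-space from 10^-3 to 10^2,
# increments of 1 in the exponent (assumed interpretation of "increments of 1").
learning_rates = [10.0 ** e for e in range(-3, 3)]

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (standard sort-based method)."""
    u = np.sort(v)[::-1]                      # sort coordinates in descending order
    css = np.cumsum(u)                        # running sums of sorted coordinates
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]   # largest index satisfying the KKT condition
    theta = (css[rho] - 1.0) / (rho + 1)      # shift that makes the projection sum to 1
    return np.maximum(v - theta, 0.0)

def euclidean_step(x, grad, lr):
    """Projected (sub)gradient step: move against the gradient, then project back onto the simplex."""
    return project_simplex(x - lr * grad)

def mirror_descent_step(x, grad, lr):
    """Entropic mirror descent step (multiplicative-weights update) that stays on the simplex."""
    y = x * np.exp(-lr * grad)
    return y / y.sum()
```

A sweep in this style would simply run the chosen update rule once per learning rate in `learning_rates` and keep the setting with the lowest exploitability; the paper's actual selection criterion and gradient estimator are described in its Appendix C.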