Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games
Authors: Tianyi Lin, Zhengyuan Zhou, Panayotis Mertikopoulos, Michael Jordan
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we consider multi-agent learning via online gradient descent (OGD) in a class of games called λ-cocoercive games, a fairly broad class of games that admits many Nash equilibria and that properly includes unconstrained strongly monotone games. We characterize the finite-time last-iterate convergence rate for joint OGD learning on λ-cocoercive games; building on this result, we develop a fully adaptive OGD learning algorithm that does not require any knowledge of the problem parameters (e.g., the cocoercivity constant λ) and show, via a novel double-stopping-time technique, that this adaptive algorithm achieves the same finite-time last-iterate convergence rate as its non-adaptive counterpart. Subsequently, we extend OGD learning to the noisy-gradient-feedback case and establish last-iterate convergence results: first qualitative almost-sure convergence, then quantitative finite-time convergence rates, all under non-decreasing step sizes. To our knowledge, we provide the first set of results that fill in several gaps in the existing multi-agent online learning literature, where three aspects (finite-time convergence rates, non-decreasing step sizes, and fully adaptive algorithms) had been unexplored before. |
| Researcher Affiliation | Collaboration | ¹Department of Industrial Engineering and Operations Research, UC Berkeley; ²Stern School of Business, New York University, and IBM Research; ³Univ. Grenoble Alpes, CNRS, Inria, LIG, 38000 Grenoble, and Criteo AI Lab; ⁴Department of Statistics and Electrical Engineering and Computer Science, UC Berkeley. |
| Pseudocode | Yes | Algorithm 1: Adaptive Online Gradient Descent. Algorithm 2: Adaptive Online Gradient Descent with Noisy Feedback Information. |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the methodology described. |
| Open Datasets | No | The paper focuses on theoretical analysis and algorithm development for multi-agent learning in games, not empirical studies using datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not involve dataset splits (training, validation, test) for empirical model validation. |
| Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for running experiments. |
| Software Dependencies | No | The paper is theoretical and focuses on mathematical proofs and algorithm design, thus it does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical, presenting algorithms and proofs. It does not describe an empirical experimental setup with hyperparameters or system-level training settings. |
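Since the paper provides no code, the joint OGD learning scheme it analyzes can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: it runs joint OGD on a hypothetical two-player strongly monotone quadratic game (a special case of the λ-cocoercive games studied in the paper), where the last iterate converges to the unique Nash equilibrium at the origin. The game parameters `a1`, `a2`, `b`, the constant step size `eta`, and the `ogd` helper are all assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical 2-player quadratic game: player i's cost is
#   f_i(x) = 0.5 * a_i * x_i**2 + b * x_i * x_j,
# so the joint gradient field is v(x) = A @ x with A positive definite
# (hence the game is strongly monotone and, a fortiori, cocoercive).
a1, a2, b = 2.0, 3.0, 0.5
A = np.array([[a1, b],
              [b, a2]])

def ogd(x0, eta=0.1, steps=200):
    """Joint online gradient descent: each player simultaneously
    takes a gradient step on its own cost. Returns the last iterate."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - eta * (A @ x)  # simultaneous per-player gradient steps
    return x

x_last = ogd([1.0, -1.0])
print(np.linalg.norm(x_last))  # distance of the last iterate to Nash (0, 0)
```

With a small enough constant step size, the last iterate (not just an average of iterates) contracts toward the Nash equilibrium, which is the kind of last-iterate behavior the paper quantifies; the paper's adaptive variant additionally tunes the step size without knowing λ.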