Improved Algorithms for Convex-Concave Minimax Optimization

Authors: Yuanhao Wang, Jian Li

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper studies minimax optimization problems $\min_x \max_y f(x, y)$, where $f(x, y)$ is $m_x$-strongly convex with respect to $x$, $m_y$-strongly concave with respect to $y$, and $(L_x, L_{xy}, L_y)$-smooth. Zhang et al. [42] provided the following lower bound on the gradient complexity of any first-order method: $\Omega\!\left(\sqrt{\frac{L_x}{m_x} + \frac{L_{xy}^2}{m_x m_y} + \frac{L_y}{m_y}}\,\ln(1/\epsilon)\right)$. This paper proposes a new algorithm with gradient complexity upper bound $O\!\left(\sqrt{\frac{L_x}{m_x} + \frac{L\,L_{xy}}{m_x m_y} + \frac{L_y}{m_y}}\,\ln(1/\epsilon)\right)$, where $L = \max\{L_x, L_{xy}, L_y\}$. (A worked instance of this setting appears after the table.)
Researcher Affiliation | Academia | Yuanhao Wang, Computer Science Department, Princeton University (yuanhao@princeton.edu); Jian Li, Institute for Interdisciplinary Information Sciences, Tsinghua University (lijian83@mail.tsinghua.edu.cn)
Pseudocode | Yes | Algorithm 1: Alternating Best Response (ABR); Algorithm 2: Accelerated Proximal Point Algorithm for Minimax Optimization; Algorithm 3: APPA-ABR; Algorithm 4: Proximal Best Response; Algorithm 5: RHSS(k) (Recursive Hermitian-skew-Hermitian Split). (A minimal best-response sketch follows the table.)
Open Source Code | No | The paper does not provide any statement or link for open-source code.
Open Datasets | No | The paper is theoretical and does not mention using or providing access information for a public dataset for training.
Dataset Splits | No | The paper is theoretical and does not provide information about dataset splits for validation.
Hardware Specification | No | The paper is theoretical and does not mention any hardware specifications used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies or versions.
Experiment Setup | No | The paper is theoretical and does not describe experimental setup details like hyperparameters or training configurations for empirical evaluation.
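
As a worked instance of the setting described in the Research Type row, the bilinear-quadratic saddle-point problem below satisfies all of the paper's assumptions. The specific function is an illustration chosen here, not an example taken from the paper.

```latex
% Hypothetical worked instance (not from the paper): a bilinear-quadratic
% saddle-point problem that is m_x-strongly convex in x, m_y-strongly
% concave in y, and (L_x, L_xy, L_y)-smooth, with L_x = m_x, L_y = m_y,
% and L_xy = \|B\| (the operator norm of the coupling matrix B).
\[
  \min_{x \in \mathbb{R}^n} \max_{y \in \mathbb{R}^m}
  \; f(x, y) = \frac{m_x}{2}\lVert x \rVert^2
  + x^{\top} B y - \frac{m_y}{2}\lVert y \rVert^2 .
\]
```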
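To make the Alternating Best Response idea listed in the Pseudocode row concrete, here is a minimal sketch on a scalar quadratic game with closed-form best responses. It illustrates the alternating scheme only, under hypothetical constants a, b, c; it is not the paper's Algorithm 1, which works with inexact best responses.

```python
# Minimal sketch (illustration only, not the paper's Algorithm 1):
# alternating best response on the scalar quadratic game
#   f(x, y) = (a/2) * x**2 + b * x * y - (c/2) * y**2,
# which is a-strongly convex in x, c-strongly concave in y,
# and has cross-smoothness L_xy = |b|.  The constants below are
# hypothetical, chosen so that b**2 < a * c (weak coupling).

a, b, c = 1.0, 0.5, 1.0
x, y = 1.0, -1.0  # arbitrary starting point

for _ in range(50):
    x = -(b / a) * y  # exact best response in x: argmin_x f(x, y)
    y = (b / c) * x   # exact best response in y: argmax_y f(x, y)

# Each round maps y -> -(b**2 / (a * c)) * y, so the iterates contract
# geometrically to the saddle point (0, 0) whenever b**2 < a * c.
print(f"x = {x:.2e}, y = {y:.2e}")  # both near zero
```

The contraction factor b**2 / (a * c) mirrors the coupling-to-curvature ratio $L_{xy}^2 / (m_x m_y)$ appearing in the bounds above; when the coupling term dominates, plain alternation on this quadratic diverges, which motivates combining best response with the accelerated proximal point scheme (Algorithms 2 and 3 in the list above).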