What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization?

Authors: Chi Jin, Praneeth Netrapalli, Michael I. Jordan

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | The main contribution of this paper is to propose a proper mathematical definition of local optimality for this sequential setting, local minimax, as well as to present its properties and existence results. Finally, we establish a strong connection to a basic local search algorithm, gradient descent ascent (GDA): under mild conditions, all stable limit points of GDA are exactly local minimax points up to some degenerate points. (The local minimax definition is restated after this table.)
Researcher Affiliation | Collaboration | Chi Jin (Princeton University), Praneeth Netrapalli (Microsoft Research, India), Michael I. Jordan (University of California, Berkeley)
Pseudocode | Yes | Algorithm 1: Gradient Descent Ascent (γ-GDA) ... Algorithm 2: Gradient Descent with Max-Oracle. (A minimal γ-GDA sketch follows this table.)
Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and does not describe experiments using specific datasets, so no public dataset availability is reported.
Dataset Splits | No | The paper does not conduct experiments, so there is no mention of training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and does not report computational experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not report computational experiments, so no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not report computational experiments, so no experimental setup details such as hyperparameters or training configurations are provided.
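For reference, the central object in the "Research Type" row above is the paper's notion of a local minimax point. The block below is our paraphrase of that definition from the ICML 2020 paper, not a verbatim quote; the envelope function h and the norm balls are as we recall them, so check the original statement before relying on the details.

```latex
% Local minimax (our paraphrase of the paper's definition; consult the
% original for the exact statement). (x^*, y^*) is a local minimax point
% of f if there exist \delta_0 > 0 and a function h with
% h(\delta) \to 0 as \delta \to 0, such that for every
% \delta \in (0, \delta_0] and every (x, y) with
% \|x - x^*\| \le \delta and \|y - y^*\| \le \delta:
\[
  f(x^{*}, y) \;\le\; f(x^{*}, y^{*})
  \;\le\; \max_{y' \,:\, \|y' - y^{*}\| \le h(\delta)} f(x, y').
\]
```

The first inequality says y* is a local maximum of f(x*, ·); the second compares (x*, y*) against nearby x only after letting y re-maximize within a radius h(δ) that shrinks with the perturbation, which is what makes the notion sequential rather than simultaneous.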
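The "Pseudocode" row refers to Algorithm 1, γ-GDA. Below is a minimal Python sketch of that update rule as we understand it, not the authors' code: the max player ascends with step size η while the min player descends with the smaller step η/γ, so γ is the timescale ratio. The function names, the quadratic test objective, and all hyperparameter values are our illustrative choices; Algorithm 2 would replace the ascent step with an (approximate) inner maximization oracle.

```python
def gamma_gda(grad_x, grad_y, x0, y0, eta=0.05, gamma=10.0, steps=2000):
    """Sketch of gamma-GDA: simultaneous gradient descent ascent in which
    the min player uses step size eta/gamma and the max player uses eta.
    This is our paraphrase of Algorithm 1, not the authors' code."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)           # gradients at the current iterate
        x, y = x - (eta / gamma) * gx, y + eta * gy   # simultaneous update of both players
    return x, y

# Illustrative test problem (our choice): f(x, y) = x^2 + 2xy - y^2 has a
# strict local minimax point at the origin, and gamma-GDA converges to it.
x_hat, y_hat = gamma_gda(grad_x=lambda x, y: 2 * x + 2 * y,
                         grad_y=lambda x, y: 2 * x - 2 * y,
                         x0=1.0, y0=1.0)
print(x_hat, y_hat)  # both approach 0
```

On this test problem, plain GDA (γ = 1) would rotate strongly around the equilibrium; the larger timescale ratio is what stabilizes the iterates, consistent with the paper's message that limit points of γ-GDA match local minimax points as γ grows.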