Linear Last-iterate Convergence in Constrained Saddle-point Optimization

Authors: Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo

Venue: ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we also provide experimental results to support our theory."
Researcher Affiliation | Academia | Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, Haipeng Luo (University of Southern California; {chenyu.wei,leechung,mengxiao.zhang,haipengl}@usc.edu)
Pseudocode | No | The paper describes the Optimistic Gradient Descent Ascent (OGDA) and Optimistic Multiplicative Weights Update (OMWU) algorithms through equations and iterative update steps (e.g., x_t = Π_X(x̂_t − η∇_x f(x_{t−1}, y_{t−1}))), but it does not present them as a formal pseudocode block or label them "Algorithm". A runnable sketch of these updates is given after this table.
Open Source Code | No | The paper contains no statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper generates synthetic data for its experiments (e.g., "generate a random matrix with each entry G_ij drawn uniformly at random from [-1, 1]") and defines the objective f(x, y) and constraint sets explicitly (e.g., X = Y = {(a, b) : 0 ≤ a, b ≤ 1, a + b = 1}). It provides no access information (link, DOI, or citation) for any publicly available dataset; the data-generation step is illustrated in the sketch after this table.
Dataset Splits | No | The experiments run on synthetically generated data, and the paper specifies no training, validation, or test splits (percentages or sample counts) for reproducibility.
Hardware Specification | No | The paper gives no details about the hardware (e.g., GPU models or CPU types) used to run the experiments.
Software Dependencies | No | The paper lists no software dependencies with version numbers (e.g., "Python 3.8, PyTorch 1.9") needed to replicate the experimental environment.
Experiment Setup | Yes | "We compare the performances of OGDA and OMWU. For both algorithms, we choose a series of different learning rates and compare their performances, as shown in Figure 1." (See the learning-rate sweep sketch after this table.)
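
Since the paper states the OGDA and OMWU updates only as equations, the following is a minimal NumPy sketch of one iteration of each for the bilinear case f(x, y) = xᵀGy over probability simplices, the setting of the paper's experiments. The function names (project_simplex, ogda_step, omwu_step) and the (x, y, x̂, ŷ) state layout are our own conventions, not the authors' code.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sort-based method)."""
    u = np.sort(v)[::-1]
    cumsum = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - cumsum) / (np.arange(len(v)) + 1.0) > 0)[0][-1]
    theta = (1.0 - cumsum[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

def _normalize(p):
    return p / p.sum()

def ogda_step(G, eta, x, y, x_hat, y_hat):
    """One OGDA iteration for min_x max_y x^T G y over simplices:
    x_t = Π_X(x̂_t − η∇_x f(x_{t−1}, y_{t−1})), x̂_{t+1} = Π_X(x̂_t − η∇_x f(x_t, y_t)),
    and symmetrically (ascent) for y. Here ∇_x f = G y and ∇_y f = Gᵀ x."""
    x_new = project_simplex(x_hat - eta * (G @ y))
    y_new = project_simplex(y_hat + eta * (G.T @ x))
    x_hat_new = project_simplex(x_hat - eta * (G @ y_new))
    y_hat_new = project_simplex(y_hat + eta * (G.T @ x_new))
    return x_new, y_new, x_hat_new, y_hat_new

def omwu_step(G, eta, x, y, x_hat, y_hat):
    """One OMWU iteration: the same optimistic scheme, with multiplicative
    weights updates replacing projected gradient steps."""
    x_new = _normalize(x_hat * np.exp(-eta * (G @ y)))
    y_new = _normalize(y_hat * np.exp(eta * (G.T @ x)))
    x_hat_new = _normalize(x_hat * np.exp(-eta * (G @ y_new)))
    y_hat_new = _normalize(y_hat * np.exp(eta * (G.T @ x_new)))
    return x_new, y_new, x_hat_new, y_hat_new
```

The secondary ("hat") sequence is what makes the updates optimistic: each visible iterate x_t steps from x̂_t using the previous round's gradient, which is the mechanism behind the last-iterate guarantees the paper studies.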
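
The experiment setup row can likewise be turned into a rough reproduction outline: draw a random payoff matrix with entries uniform in [−1, 1] as the paper describes, then sweep a grid of learning rates for both algorithms. The matrix size, learning-rate grid, iteration count, and seed below are illustrative assumptions; the paper reports its settings only through Figure 1.

```python
def duality_gap(G, x, y):
    """max over y' of x^T G y' minus min over x' of x'^T G y (both over the
    simplex); this gap is zero exactly at a Nash equilibrium."""
    return np.max(G.T @ x) - np.min(G @ y)

rng = np.random.default_rng(0)                   # seed chosen for illustration
n = 5                                            # hypothetical problem size
G = rng.uniform(-1.0, 1.0, size=(n, n))          # entries uniform in [-1, 1], as in the paper

for step in (ogda_step, omwu_step):
    for eta in (0.01, 0.05, 0.1, 0.5):           # hypothetical learning-rate grid
        x = y = x_hat = y_hat = np.ones(n) / n   # start from the uniform strategy
        for _ in range(5000):
            x, y, x_hat, y_hat = step(G, eta, x, y, x_hat, y_hat)
        print(f"{step.__name__}, eta={eta}: duality gap = {duality_gap(G, x, y):.2e}")
```

Plotting the duality gap against the iteration index on a log scale for each learning rate would recover the kind of comparison the paper shows in Figure 1.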