A Catalyst Framework for Minimax Optimization

Authors: Junchi Yang, Siqi Zhang, Negar Kiyavash, Niao He

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We carry out several numerical experiments showcasing the superiority of the Catalyst framework in practice."
Researcher Affiliation | Academia | Junchi Yang (UIUC, junchiy2@illinois.edu); Siqi Zhang (UIUC, siqiz4@illinois.edu); Negar Kiyavash (EPFL, negar.kiyavash@epfl.ch); Niao He (UIUC & ETH Zurich, niao.he@inf.ethz.ch)
Pseudocode | Yes | Algorithm 1, "Catalyst for SC-C Minimax Optimization" (a sketch of the outer loop follows the table)
Open Source Code | No | No explicit statement or link providing access to the authors' source code for the methodology described in the paper.
Open Datasets | No | "We generate two datasets with (1) β = 1 and σ0 ∈ R^1000 uniformly from [0, 100]^1000, (2) β = 1 and σ0 ∈ R^500 uniformly from [0, 10]^500." (generation sketched after the table)
Dataset Splits | No | The paper describes generating datasets but does not provide specific details on training, validation, or test splits, or reference any standard predefined splits.
Hardware Specification | No | No specific hardware details (such as GPU models, CPU types, or cloud instance specifications) used for running the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver names with versions) are mentioned in the paper.
Experiment Setup | Yes | "In Figure 1, we apply the same stepsizes to EG and the subroutine in Catalyst-EG, and we compare their convergence results with stepsizes from small to large. In Figure 2, we compare four algorithms: extragradient (EG), SVRG, Catalyst-EG, and Catalyst-SVRG, with best-tuned stepsizes... In Catalyst, we use ‖x_t − P_X(x_t − β ∇_x f(x_t, y_t))‖/β + ‖y_t − P_Y(y_t + β ∇_y f(x_t, y_t))‖/β as the stopping criterion for the subproblem, which is discussed in Section 2. We control the subroutine accuracy ε(t) as max{c/t^8, ε}, where c is a constant and ε is a prefixed threshold." (criterion and accuracy schedule sketched after the table)
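
For the Pseudocode row: the paper's Algorithm 1 wraps a base solver around proximally regularized subproblems. Below is a minimal Python sketch of a Catalyst-style outer loop for the SC-C setting, assuming a hypothetical inexact subproblem solver f_solver; the proximal weight tau, the prox-center update, and the interface are simplified stand-ins, not the paper's exact Algorithm 1.

```python
def catalyst_scc(f_solver, x0, y0, tau, num_outer, accuracy):
    """Minimal sketch of a Catalyst-style outer loop for strongly-convex-
    concave (SC-C) minimax problems.

    f_solver(x_init, y_init, z, tau, tol) is a hypothetical base method
    (e.g. EG or SVRG) assumed to inexactly solve the regularized subproblem
        min_x max_y  f(x, y) - (tau / 2) * ||y - z||^2
    to accuracy tol and return the approximate saddle point (x, y).
    """
    x, y = x0, y0
    z = y0  # prox center for the quadratic term added in y
    for t in range(1, num_outer + 1):
        # The added -(tau/2)||y - z||^2 term makes the subproblem strongly
        # convex-strongly concave, so fast SC-SC solvers apply to it.
        x, y = f_solver(x, y, z, tau, tol=accuracy(t))
        # Simplified prox-center update; the paper's Algorithm 1 may use a
        # momentum/extrapolation step here instead.
        z = y
    return x, y
```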
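For the Open Datasets row: the quoted generation step can be reproduced in a few lines of NumPy. The random seed, and the objective in which β and σ0 later appear, are not specified in the excerpt and are assumptions here.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an assumption; not given in the paper

# Dataset (1): beta = 1, sigma0 in R^1000 with entries uniform on [0, 100]
beta1, sigma0_1 = 1.0, rng.uniform(0.0, 100.0, size=1000)

# Dataset (2): beta = 1, sigma0 in R^500 with entries uniform on [0, 10]
beta2, sigma0_2 = 1.0, rng.uniform(0.0, 10.0, size=500)
```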
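For the Experiment Setup row: a sketch of the quoted stopping criterion and subroutine-accuracy schedule. The callables grad_x, grad_y, proj_X, and proj_Y (partial gradients of f and Euclidean projections onto X and Y) are hypothetical placeholders, and the default values of c and eps are assumptions.

```python
import numpy as np

def stopping_criterion(x, y, grad_x, grad_y, proj_X, proj_Y, beta):
    """Gradient-mapping residual used as the subproblem stopping criterion:
    ||x - P_X(x - beta*grad_x f(x,y))||/beta + ||y - P_Y(y + beta*grad_y f(x,y))||/beta.
    """
    rx = np.linalg.norm(x - proj_X(x - beta * grad_x(x, y))) / beta
    ry = np.linalg.norm(y - proj_Y(y + beta * grad_y(x, y))) / beta
    return rx + ry

def subproblem_accuracy(t, c=1.0, eps=1e-9):
    """Target accuracy for the t-th subproblem (t >= 1): max{c / t^8, eps}.
    The constants c and eps are left unspecified in the paper."""
    return max(c / t**8, eps)
```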