Adaptive and Universal Algorithms for Variational Inequalities with Optimal Convergence

Authors: Alina Ene, Huy Lê Nguyễn (pp. 6559–6567)

Venue: AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4 Experimental Evaluation. In this section, we give experimental results on bilinear saddle point instances. We provide additional experimental results, including an experiment on training generative adversarial networks, in the full version."
Researcher Affiliation | Academia | "Alina Ene¹, Huy L. Nguyen²; ¹Boston University, ²Northeastern University; aene@bu.edu, hu.nguyen@northeastern.edu"
Pseudocode | Yes | "Algorithm 1: AdaPEG algorithm for bounded domains X. Let x₀ = z₀ ∈ X, γ₀ > 0, η > 0. For t = 1, …, T, update: …" (an illustrative sketch of such an update loop follows the table)
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the methodology described.
Open Datasets | No | "Instances: We consider bilinear saddle point problems min_{u∈U} max_{v∈V} f(u, v), where f(u, v) = (1/n) Σ_{i=1}^{n} uᵀA⁽ⁱ⁾v and A⁽ⁱ⁾ ∈ ℝ^{d×d} for each i ∈ [n]. The strong solution is x* = (u*, v*) = 0. Each matrix A⁽ⁱ⁾ was generated by first sampling a diagonal matrix with entries drawn from the Uniform([−10, 10]) distribution, and then applying a random rotation drawn from the Haar distribution." (an instance-generation sketch follows the table)
Dataset Splits | No | The paper describes the generation of synthetic instances and general experimental settings but does not explicitly specify training, validation, or test dataset splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments.
Software Dependencies | No | The paper mentions general software like PyTorch in the context of common practice but does not provide specific version numbers for software dependencies used in its experiments.
Experiment Setup | Yes | "Hyperparameters: In the deterministic experiments, we used a uniform step size η = 1/β for the Extra-Gradient method and η = 1/(2β) for the Past Extra-Gradient method, as suggested by the theoretical analysis (Hsieh et al. 2019). ... All of the hyperparameter searches picked the best value from the set {1, 5} × {10⁵, 10⁴, …, 10¹, 1, 10⁻¹, …, 10⁻⁴, 10⁻⁵}." (a step-size and search-grid sketch follows the table)
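
The Pseudocode excerpt above cuts off before Algorithm 1's actual update rule, so the following Python sketch only illustrates the general shape of an adaptive past extra-gradient loop: a Popov-style extrapolation with an AdaGrad-style step-size accumulator. The operator `F`, the projection `project`, and the accumulator update are assumptions made for illustration, not the paper's AdaPEG rule.

```python
import numpy as np

def ada_peg(F, project, x0, eta=1.0, gamma0=1.0, T=1000):
    """Past extra-gradient loop with an assumed AdaGrad-style step size."""
    x, z = x0.copy(), x0.copy()
    g_prev = F(x)                  # "past" gradient, initialized at x0
    acc = gamma0 ** 2              # running accumulator for the adaptive scale
    for _ in range(T):
        step = eta / np.sqrt(acc)
        x = project(z - step * g_prev)            # extrapolate using the past gradient
        g = F(x)
        z = project(z - step * g)                 # take the step with the fresh gradient
        acc += np.linalg.norm(g - g_prev) ** 2    # assumed accumulator; not the paper's gamma_t
        g_prev = g
    return x
```

For the bilinear instances below, `F` would stack the two partial gradients ((1/n) Σ A⁽ⁱ⁾v, −(1/n) Σ A⁽ⁱ⁾ᵀu) and `project` would be the Euclidean projection onto the bounded domain X.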
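A minimal sketch of the instance generation quoted in the Open Datasets row, assuming "applying a random rotation" means conjugating the sampled diagonal matrix by a Haar-distributed rotation Q (A = Q D Qᵀ); Q @ D is another plausible reading. Function names are ours.

```python
import numpy as np
from scipy.stats import special_ortho_group

def make_instance(n, d, seed=0):
    """Generate the n matrices A^(i) of one bilinear saddle point instance."""
    rng = np.random.default_rng(seed)
    mats = []
    for _ in range(n):
        D = np.diag(rng.uniform(-10.0, 10.0, size=d))        # entries ~ Uniform([-10, 10])
        Q = special_ortho_group.rvs(dim=d, random_state=rng)  # Haar-distributed rotation
        mats.append(Q @ D @ Q.T)                              # assumed reading of "applying"
    return mats

def f(mats, u, v):
    """f(u, v) = (1/n) * sum_i u^T A^(i) v; the strong solution is (u*, v*) = 0."""
    return np.mean([u @ A @ v for A in mats])
```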
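The search grid and deterministic step sizes in the Experiment Setup row can be written out as below. Taking the smoothness constant β to be the spectral norm of the averaged matrix (1/n) Σ A⁽ⁱ⁾ is our assumption for this bilinear operator, not a value stated in the excerpt.

```python
import numpy as np

# Search grid {1, 5} x {1e5, 1e4, ..., 10, 1, 0.1, ..., 1e-5} from the quote above.
grid = sorted(c * 10.0 ** k for k in range(-5, 6) for c in (1, 5))

def deterministic_step_sizes(mats):
    """eta = 1/beta for Extra-Gradient, 1/(2*beta) for Past Extra-Gradient."""
    beta = np.linalg.norm(np.mean(mats, axis=0), ord=2)  # assumed: spectral norm of (1/n) sum_i A^(i)
    return {"EG": 1.0 / beta, "PEG": 1.0 / (2.0 * beta)}
```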