Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Generalized Smooth Stochastic Variational Inequalities: Almost Sure Convergence and Convergence Rates

Authors: Daniil Vankov, Angelia Nedić, Lalitha Sankar

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also provide the first in-expectation unbiased convergence rate results for these methods under a relaxed smoothness assumption for α ≤ 1. Finally, we present numerical experiments where we compare the performance of the methods with the proposed stochastic clipping for different stepsize parameters q > 1/2 and quasi-sharpness parameters p.
Researcher Affiliation | Academia | Daniil Vankov (EMAIL), Arizona State University; Angelia Nedić (EMAIL), Arizona State University; Lalitha Sankar (EMAIL), Arizona State University
Pseudocode | No | The paper describes the methods using mathematical equations: "Stochastic projection method: u_{k+1} = P_U(u_k − γ_k Φ(u_k, ξ_k)) (7)" and "Stochastic Korpelevich method: u_k = P_U(h_k − γ_k Φ(h_k, ξ^1_k)), h_{k+1} = P_U(h_k − γ_k Φ(u_k, ξ^2_k)) (8)". These are not formatted as pseudocode blocks or algorithms.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide a link to a code repository for the methodology described.
Open Datasets | No | We consider an unconstrained minmax game: min_{u1} max_{u2} (1/p)‖u1‖^p + ⟨u1, u2⟩ − (1/p)‖u2‖^p with p > 1, u1 ∈ R^d, and u2 ∈ R^d. The corresponding operator F : R^{2d} → R^{2d} is defined by F(u) = (‖u1‖^{p−2} u1 + u2, ‖u2‖^{p−2} u2 − u1). We assume that we have access only to a noisy evaluation of the corresponding operator and aim to solve the unconstrained SVI(R^{2d}, F) with the following stochastic operator: Φ(u, ξ) = F(u) + ξ, where ξ is a random vector with independent zero-mean Gaussian entries and variance σ^2 = 1. Then F(u) = E[Φ(u, ξ)] is an α-symmetric and p-quasi-sharp operator due to Vankov et al. (2024). We set these parameters to {(α = 0.33, p = 2.5), (α = 0.5, p = 3.0), (α = 0.8, p = 6.0)}.
Dataset Splits | No | The paper describes numerical experiments on a synthetically generated minmax game. This type of experiment does not typically involve training/test/validation dataset splits, and no such splits are mentioned in the text.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation.
Experiment Setup | Yes | In Figure 1, we plot the average distance from the current iterate to the solution set, over twenty runs, as a function of the number of iterations. In particular, the stepsizes for the clipped stochastic projection and Korpelevich methods are chosen according to Theorems 3.4 and 4.4, respectively, with β_k = 100/(100 + k^{1/2+ϵ}) for q = 1/2 + ϵ with ϵ > 0. We also set β_k = 100/(100 + k^{1−ϵ}) for the stochastic clipped Popov method and the stochastic clipped projection method using the same sample Φ(u_k, ξ_k) for clipping. We set these parameters to {(α = 0.33, p = 2.5), (α = 0.5, p = 3.0), (α = 0.8, p = 6.0)}. We assume that we have access only to a noisy evaluation of the corresponding operator and aim to solve the unconstrained SVI(R^{2d}, F) with the stochastic operator Φ(u, ξ) = F(u) + ξ, where ξ is a random vector with independent zero-mean Gaussian entries and variance σ^2 = 1.
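The experiment quoted in the table can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: it assumes the unconstrained domain (so the projection P_U is the identity on R^{2d}), uses the β_k schedule from the quoted setup directly as the stepsize, and applies a simple norm-clipping rule (rescale the noisy operator to norm at most 1/β_k). The paper's exact stepsize and clipping choices come from its Theorems 3.4 and 4.4 and are not reproduced here.

```python
import numpy as np

def F(u, p):
    # Operator of the quoted minmax game, with u = (u1, u2):
    # F(u) = (||u1||^{p-2} u1 + u2, ||u2||^{p-2} u2 - u1).
    d = u.shape[0] // 2
    u1, u2 = u[:d], u[d:]
    return np.concatenate([
        np.linalg.norm(u1) ** (p - 2) * u1 + u2,
        np.linalg.norm(u2) ** (p - 2) * u2 - u1,
    ])

def clipped_projection_run(p, d=10, iters=1000, eps=0.1, seed=0):
    # One run of a clipped stochastic projection method on the game above.
    # The clipping rule here (rescale the noisy operator to norm <= 1/beta_k)
    # is an illustrative assumption, not necessarily the paper's exact rule.
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(2 * d)
    dists = []
    for k in range(1, iters + 1):
        beta = 100.0 / (100.0 + k ** (0.5 + eps))  # beta_k from the quoted setup
        g = F(u, p) + rng.standard_normal(2 * d)   # Phi(u, xi) = F(u) + xi, sigma^2 = 1
        norm = np.linalg.norm(g)
        if norm > 1.0 / beta:
            g = g / (beta * norm)                  # clip so that ||g|| <= 1/beta_k
        u = u - beta * g                           # P_U is the identity on R^{2d}
        dists.append(float(np.linalg.norm(u)))     # distance to the solution u* = 0
    return dists
```

Averaging `clipped_projection_run` over twenty seeds for each (α, p) pair would reproduce the general shape of the Figure 1 experiment described in the quoted setup.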