Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Constrained Online Convex Optimization with Polyak Feasibility Steps

Authors: Spencer Hutchinson, Mahnoosh Alizadeh

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Although our primary contribution is our theoretical results, we also give numerical experiments to demonstrate the functionality of the algorithm and provide some empirical verification of the theoretical results. In these experiments, we benchmark the performance of our algorithm with the algorithm from Yu et al. (2017) as it has the best regret bound among those in Table 1.
Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, University of California-Santa Barbara, Santa Barbara, California, USA. Correspondence to: Spencer Hutchinson <EMAIL>, Mahnoosh Alizadeh <EMAIL>.
Pseudocode | Yes | Algorithm 1: OGD with Polyak Feasibility Steps
Open Source Code | Yes | Our experiment code is available at https://github.com/shutch1/OCO-Polyak-Feasibility-Steps.
Open Datasets | No | We consider a 2-dimensional toy setting with quadratic cost functions f_t(x) = 3‖x − v_t‖² and linear constraints Ax ≤ b. We generate v_t by sampling uniformly from [0, 1]², and take A = [I, −I]⊤ and b = 0.5·𝟙, where we use I to denote the 2 × 2 identity matrix.
Dataset Splits | No | The paper describes generating a synthetic dataset but does not specify any training, validation, or test splits. The experiments run for a specified 'Horizon T'.
Hardware Specification | No | The paper describes a '2-dimensional toy setting' and does not mention any specific hardware used for running the experiments.
Software Dependencies | No | The paper does not mention any specific software dependencies or their version numbers.
Experiment Setup | Yes | In this setting, we implement Algorithm 1 (labeled PFS) with the algorithm parameters chosen according to Theorem 1, as well as the algorithm in Yu et al. (2017) (labeled DPP) with the algorithm parameters chosen according to their Theorem 1, i.e. α = T, V = √T. We also implement the algorithm from Yu et al. (2017) with a tightened constraint g_ρ(x) := g(x) + ρ for ρ ∈ [0, ϵ], as is used in our algorithm and that in Mahdavi et al. (2012) and Jenatton et al. (2016). As in the aforementioned algorithms, we use a decreasing ρ = min(ϵ, c/√T), where c > 0 is a parameter that we tune to reduce the violation. We choose c = 20 to ensure that there is a small amount of constraint violation.
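The toy setting quoted above can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' exact Algorithm 1: the step size `eta`, the seed, and the single-constraint Polyak step are assumptions for the sketch. The costs are f_t(x) = 3‖x − v_t‖² with box constraints Ax ≤ b for A = [I, −I]⊤ and b = 0.5·𝟙, and a Polyak-style feasibility step pulls the iterate back toward the (optionally tightened) feasible set after each gradient step.

```python
import numpy as np

# 2-D toy setting: costs f_t(x) = 3*||x - v_t||^2, v_t ~ U([0,1]^2),
# constraints A x <= b with A = [I, -I]^T and b = 0.5*ones,
# i.e. the box [-0.5, 0.5]^2.
rng = np.random.default_rng(0)  # assumed seed, not from the paper
T = 1000                        # horizon
eta = 1.0 / np.sqrt(T)          # assumed OGD step size
A = np.vstack([np.eye(2), -np.eye(2)])
b = 0.5 * np.ones(4)

def polyak_feasibility_step(x, rho=0.0):
    """Generic Polyak subgradient step on the most-violated tightened
    constraint g_i(x) + rho = a_i @ x - b_i + rho <= 0 (a sketch, not
    necessarily the exact step used in the paper)."""
    g = A @ x - b + rho
    i = int(np.argmax(g))
    if g[i] > 0:  # infeasible: move along -grad g_i = -a_i
        x = x - (g[i] / np.dot(A[i], A[i])) * A[i]
    return x

x = np.zeros(2)
total_violation = 0.0
for t in range(T):
    v_t = rng.uniform(0.0, 1.0, size=2)  # cost parameter
    grad = 6.0 * (x - v_t)               # gradient of 3*||x - v_t||^2
    x = x - eta * grad                   # online gradient descent step
    x = polyak_feasibility_step(x)       # pull back toward feasibility
    total_violation += max(0.0, float(np.max(A @ x - b)))

print(f"final iterate: {x}, cumulative violation: {total_violation:.4f}")
```

Because the unconstrained minimizers v_t concentrate around (0.5, 0.5), the iterate hovers near the upper-right corner of the box, so the constraints x_i ≤ 0.5 are typically active; this makes the setting a natural stress test for feasibility-enforcing methods.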