Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

From Low Probability to High Confidence in Stochastic Convex Optimization

Authors: Damek Davis, Dmitriy Drusvyatskiy, Lin Xiao, Junyu Zhang

JMLR 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on light-tail noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.
Researcher Affiliation | Collaboration | Damek Davis (EMAIL), School of Operations Research and Information Engineering, Cornell University, Ithaca, NY 14850, USA; Dmitriy Drusvyatskiy (EMAIL), Department of Mathematics, University of Washington, Seattle, WA 98195, USA; Lin Xiao (EMAIL), Facebook AI Research, Seattle, WA 98019, USA; Junyu Zhang (EMAIL), Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA
Pseudocode | Yes | Algorithm 1: Robust Distance Estimation D(ε, m); Algorithm 2: proxBoost(δ, p, T); Algorithm 3: ERM(n, λ, x); Algorithm 4: ERM-R(n, m, λ, x); Algorithm 5: BoostERM(γ, T, m); Algorithm 6: Alg-R(δ, λ, , x, m); Algorithm 7: BoostAlg(δ, ε_in, x_in, T, m); Algorithm 8: Extract({y_i}_{i=1}^m, ρ); Algorithm 9: Robust Gap(Mf(·), m, ϵ); Algorithm 10: BoostERM_C(δ, T, m); Algorithm 11: BoostAlg_C(δ, ε_in, x_in, T, m)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include any links to code repositories.
Open Datasets | No | The paper does not provide concrete access information (links, DOIs, formal citations) for any specific datasets used for experimental validation. It discusses theoretical applications to 'empirical risk minimization' using generic i.i.d. samples z_1, …, z_n ∼ P, and mentions examples like 'ridge regression', but does not use or provide access to named datasets.
Dataset Splits | No | The paper is theoretical and focuses on algorithm design and proofs. It does not describe any specific experiments that would require dataset splits.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and proofs. It does not describe any specific hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and focuses on algorithm design and proofs. It does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design, proofs, and analyses of sample complexity. It does not describe specific experimental setups with hyperparameters, training configurations, or system-level settings.
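The abstract describes proxBoost as built from robust distance estimation and the proximal point method: run several independent low-confidence trials of a base method, then robustly aggregate the candidates so that the failure probability drops exponentially in the number of trials. The following is a minimal illustrative sketch of that robust aggregation idea only (a standard "median of candidates" selection rule), not the authors' Algorithm 1; the function names and the aggregation rule are assumptions for illustration.

```python
import numpy as np

def robust_select(candidates):
    # Illustrative robust selection: among m candidate solutions from
    # independent trials, return the one whose median distance to the
    # other candidates is smallest. If a majority of trials land near
    # the true solution, the selected point is near it as well, so the
    # confidence level is boosted relative to any single trial.
    pts = np.asarray(candidates, dtype=float)
    # pairwise Euclidean distance matrix (m x m)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    scores = np.median(dists, axis=1)
    return pts[int(np.argmin(scores))]

def boost(trial, m):
    # Run m independent low-confidence trials of `trial` (a callable
    # returning a candidate point) and robustly aggregate the results.
    return robust_select([trial() for _ in range(m)])
```

For example, with four candidates clustered near a minimizer and one far-away outlier, `robust_select` returns a point from the cluster, since the outlier's median distance to the rest is large.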