Almost Tune-Free Variance Reduction

Authors: Bingcong Li, Lingda Wang, Georgios B. Giannakis

ICML 2020

Each entry below gives a reproducibility variable, its result, and the LLM's supporting response.
Research Type: Experimental. Numerical tests corroborate the proposed methods.
Researcher Affiliation: Academia. University of Minnesota, MN, USA; University of Illinois at Urbana-Champaign, IL, USA.
Pseudocode: Yes. Algorithm 1 (SVRG) and Algorithm 2 (SARAH).
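The listed algorithms build on the standard SVRG and SARAH templates. As an illustration only, here is a minimal Python sketch of the generic SVRG loop (the Johnson-Zhang form, not the authors' exact pseudocode); `grad_i`, `eta`, and `m` are assumed placeholders for the component gradients, step size, and inner-loop length.

```python
import numpy as np

def svrg(grad_i, x0, n, eta, m, outer_iters, seed=0):
    """Sketch of the standard SVRG template.

    Minimizes F(x) = (1/n) * sum_i f_i(x), where grad_i(x, i) returns
    the gradient of the i-th component f_i at x. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    x_tilde = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        # Full gradient at the snapshot point x_tilde.
        mu = sum(grad_i(x_tilde, i) for i in range(n)) / n
        x = x_tilde.copy()
        for _ in range(m):
            i = int(rng.integers(n))
            # Variance-reduced stochastic gradient estimate.
            v = grad_i(x, i) - grad_i(x_tilde, i) + mu
            x = x - eta * v
        x_tilde = x  # use the last inner iterate as the next snapshot
    return x_tilde
```

SARAH differs only in the inner loop, replacing v with the recursive estimator v_t = ∇f_i(x_t) − ∇f_i(x_{t−1}) + v_{t−1}.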
Open Source Code: No. The paper does not provide concrete access to source code for the methodology described.
Open Datasets: Yes. The proposed tune-free BB-SVRG and BB-SARAH are applied to binary classification via regularized logistic regression (cf. eq. (6)) using the a9a, rcv1.binary, and real-sim datasets from LIBSVM, available online at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html.
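As a concrete but hypothetical illustration of this pipeline, the sketch below loads one of the listed LIBSVM datasets with scikit-learn and forms an l2-regularized logistic regression objective of the kind referenced as eq. (6); the file path and the regularization weight `reg` are placeholders, since the paper's actual µ values are deferred to its Appendix D.2.

```python
import numpy as np
from sklearn.datasets import load_svmlight_file

# a9a in LIBSVM format, downloaded from the URL above; the path is illustrative.
X, y = load_svmlight_file("a9a")  # X: sparse (n, d), labels y in {-1, +1}
n, d = X.shape
reg = 1.0 / n  # placeholder l2 weight; the paper's mu values are in its Appendix D.2

def objective(w):
    """Regularized logistic regression loss (cf. eq. (6), up to notation)."""
    z = -y * (X @ w)  # negative margins
    return np.mean(np.logaddexp(0.0, z)) + 0.5 * reg * (w @ w)

def grad_i(w, i):
    """Gradient of the i-th component; plugs into the SVRG sketch above."""
    xi = X.getrow(i).toarray().ravel()
    zi = -y[i] * (xi @ w)
    return -y[i] / (1.0 + np.exp(-zi)) * xi + reg * w
```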
Dataset Splits: No. The paper mentions the a9a, rcv1.binary, and real-sim datasets and states that "Details regarding the datasets, the µ values used, and implementation details are deferred to Appendix D.2." However, neither Appendix D.2 nor the main text gives specific training/validation/test splits (e.g., percentages or sample counts).
Hardware Specification: No. The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used to run its experiments.
Software Dependencies: No. The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup: Yes. For SVRG and SARAH, m = 5κ is fixed and the best step sizes are tuned. For BB-SVRG, ηs and ms are chosen as in (9) with θκ = 4κ (as in Proposition 1) and c = 1; for BB-SARAH, θκ = κ (as in Proposition 2) and c = 1. W-Avg (weighted averaging) is adopted for both BB-SVRG and BB-SARAH.
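The ηs rule referenced as eq. (9) is a Barzilai-Borwein (BB) step size. The sketch below shows the classical BB quotient scaled by θκ, which is only assumed here to match the general shape of (9); the exact expression, and how ms is derived from ηs with c = 1, should be taken from the paper.

```python
import numpy as np

def bb_step_size(x_curr, x_prev, g_curr, g_prev, theta_kappa):
    """Sketch of a theta-scaled Barzilai-Borwein step size.

    With s = x_s - x_{s-1} and u = grad F(x_s) - grad F(x_{s-1}),
    returns ||s||^2 / (theta_kappa * s^T u). Per the quoted setup,
    theta_kappa = 4*kappa for BB-SVRG (Proposition 1) and kappa for
    BB-SARAH (Proposition 2). Only the scaling convention of the
    paper's eq. (9) is assumed here, not reproduced verbatim.
    """
    s = np.asarray(x_curr) - np.asarray(x_prev)
    u = np.asarray(g_curr) - np.asarray(g_prev)
    return float(s @ s) / (theta_kappa * float(s @ u))
```

In a BB-SVRG or BB-SARAH loop, this quantity would be recomputed once per outer epoch from the two most recent snapshots and their full gradients.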