Breaking the Span Assumption Yields Fast Finite-Sum Minimization

Authors: Robert Hannah, Yanli Liu, Daniel O'Connor, Wotao Yin

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section we compare the performance of SVRG and SARAH to SAGA to verify our conclusions. We solve the regularized least squares problem $\min_x \frac{1}{2n}\|Ax - b\|_2^2 + \frac{\lambda}{2}\|x\|_2^2$. ... Figure 5.1: Comparison of SAGA, SVRG, and SARAH for various values of the condition number $\kappa$." (Sketches of this setup and comparison follow the table.)
Researcher Affiliation | Academia | Robert Hannah (1), Yanli Liu (1), Daniel O'Connor (2), and Wotao Yin (1). (1) Department of Mathematics, University of California, Los Angeles; (2) Department of Mathematics, University of San Francisco.
Pseudocode | Yes | Algorithm 1: Prox-SVRG(F, x0, eta, m). (A minimal Python sketch follows the table.)
Open Source Code | No | The paper does not provide any statement about releasing open-source code or a link to a code repository.
Open Datasets | No | The matrix A and vector b are generated randomly with entries uniformly distributed between 0 and 1.
Dataset Splits | No | The paper describes generating random data but does not specify any training, validation, or test splits, or a cross-validation method.
Hardware Specification | No | The paper does not provide hardware details (e.g., CPU or GPU model, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies or version numbers for the experiments.
Experiment Setup | Yes | "The parameter $\lambda$ is chosen to control the condition number $\kappa = L/\mu$ of the problem. ... In order to provide a fair comparison, step sizes were tuned individually for each algorithm and each problem instance." (See the data-generation sketch after the table.)
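
The open-datasets and experiment-setup rows together describe the full data pipeline: random uniform entries and a regularization weight $\lambda$ chosen to hit a target $\kappa = L/\mu$. Below is a minimal NumPy sketch of that setup; the function name make_problem and the eigenvalue-based back-solve for $\lambda$ are illustrative assumptions, since the paper does not state its exact procedure.

```python
import numpy as np

def make_problem(n, d, kappa, seed=0):
    """Random regularized least-squares instance with target kappa = L/mu.

    Objective: (1/2n)||Ax - b||^2 + (lam/2)||x||^2.
    Its Hessian is A^T A / n + lam * I, so L = s_max/n + lam and
    mu = s_min/n + lam, where s_max, s_min are the extreme
    eigenvalues of A^T A.
    """
    rng = np.random.default_rng(seed)
    # Entries uniformly distributed between 0 and 1, per the paper.
    A = rng.uniform(0.0, 1.0, size=(n, d))
    b = rng.uniform(0.0, 1.0, size=n)
    eigs = np.linalg.eigvalsh(A.T @ A) / n
    L0, mu0 = eigs[-1], eigs[0]
    # Solve (L0 + lam) / (mu0 + lam) = kappa for lam; this assumes
    # kappa < L0 / mu0 so that lam > 0 (an illustrative choice, not
    # necessarily how the authors picked lambda).
    lam = (L0 - kappa * mu0) / (kappa - 1.0)
    return A, b, lam
```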
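
The pseudocode row points to Algorithm 1, Prox-SVRG(F, x0, eta, m). The following is a minimal sketch of the standard Prox-SVRG loop on this problem, treating the least-squares term as the smooth finite sum and handling the $\ell_2$ regularizer through its closed-form proximal step; the epoch count and last-iterate snapshot rule are illustrative assumptions, not details taken from the paper.

```python
def prox_svrg(A, b, lam, x0, eta, m, epochs, seed=0):
    """Sketch of Prox-SVRG for (1/2n)||Ax - b||^2 + (lam/2)||x||^2.

    Smooth part: f_i(x) = (1/2)(a_i^T x - b_i)^2, averaged over i.
    Prox part:   r(x) = (lam/2)||x||^2, with prox(z) = z / (1 + eta*lam).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x_snap = x0.copy()
    for _ in range(epochs):
        # Full gradient of the smooth part at the snapshot point.
        full_grad = A.T @ (A @ x_snap - b) / n
        x = x_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient.
            v = a_i * (a_i @ x - b[i]) - a_i * (a_i @ x_snap - b[i]) + full_grad
            # Proximal gradient step for the l2 regularizer.
            x = (x - eta * v) / (1.0 + eta * lam)
        x_snap = x  # last-iterate snapshot (one common variant)
    return x_snap
```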
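
For the SAGA side of the comparison described in the research-type row, here is a matching sketch with the standard per-component gradient table; as the experiment-setup row notes, the step size eta would be tuned individually for each algorithm and problem instance, which this sketch leaves to the caller.

```python
def saga(A, b, lam, x0, eta, iters, seed=0):
    """Sketch of SAGA on the same objective, storing the most recent
    gradient of each component f_i(x) = (1/2)(a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = x0.copy()
    grads = A * (A @ x - b)[:, None]    # row i holds a_i (a_i^T x0 - b_i)
    g_avg = grads.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g_new = A[i] * (A[i] @ x - b[i])
        v = g_new - grads[i] + g_avg    # SAGA variance-reduced gradient
        g_avg += (g_new - grads[i]) / n # keep the table average current
        grads[i] = g_new
        # Same closed-form prox step as in the Prox-SVRG sketch.
        x = (x - eta * v) / (1.0 + eta * lam)
    return x
```

With these pieces, a Figure 5.1-style comparison would amount to sweeping $\kappa$ via make_problem, tuning eta per algorithm and instance as the setup row describes, and plotting suboptimality against passes over the data.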