Fast Variance Reduction Method with Stochastic Batch Size
Authors: Xuanqing Liu, Cho-Jui Hsieh
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our algorithm outperforms SAGA and other existing batched and stochastic solvers on real datasets. In addition, we also conduct a precise analysis to compare different update rules for variance reduction methods, showing that SAGA++ converges faster than SVRG in theory. |
| Researcher Affiliation | Academia | ¹Department of Computer Science, University of California, Davis, California, USA; ²Department of Statistics, University of California, Davis, California, USA |
| Pseudocode | Yes | Algorithm 1: "Variance Reduction Method with Stochastic Batch Size" (a minimal sketch of this style of update appears in the first code block below the table) |
| Open Source Code | No | The paper does not provide a link to, or an explicit statement about, released source code for its method. It mentions that 'All the algorithms are implemented based on the LIBLINEAR code base', but LIBLINEAR is a third-party library, not the authors' implementation. |
| Open Datasets | Yes | 'All the datasets can be downloaded from LIBSVM website.' They are available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ |
| Dataset Splits | No | The paper does not give the split information (percentages or sample counts for training, validation, and test sets) needed to reproduce the data partitioning. It says the datasets come from the LIBSVM website but not how they were divided for the experiments. |
| Hardware Specification | No | The paper does not report the hardware (CPU/GPU models, processor types, memory) used for its experiments; it only acknowledges 'the computing resources provided by Google cloud and Nvidia'. |
| Software Dependencies | No | The paper states, 'All the algorithms are implemented based on the LIBLINEAR code base', but it does not specify a version number for LIBLINEAR or any other software dependencies. |
| Experiment Setup | Yes | For each outer iteration in SVRG/SAGA++ we choose m = 1.5n inner iterations... The lazy update for ℓ1 regularization is also implemented for all the variance reduced methods. ...with different regularization parameters. Indeed, λ = 10^-6 (the middle figure) is the best parameter... (the lazy-update trick is sketched in the second code block below the table) |
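The paper's Algorithm 1 and the m = 1.5n inner-loop setting quoted above follow the SVRG template of alternating full-gradient snapshots with variance-reduced stochastic steps. The sketch below is a minimal NumPy rendering of that template under our own naming (`svrg` and `grad_i` are assumptions, not the paper's code); it is not the paper's SAGA++, which, as we read it, additionally keeps per-sample gradient memory up to date SAGA-style during the inner loop.

```python
import numpy as np

def svrg(grad_i, w0, n, step_size=0.1, outer_iters=20, inner_ratio=1.5, rng=None):
    """SVRG-style variance reduction (a sketch, not the paper's SAGA++).

    grad_i(w, i) returns the gradient of the i-th component function at w;
    n is the number of components. m = inner_ratio * n inner iterations
    mirrors the m = 1.5n setting quoted in the setup row above.
    """
    rng = np.random.default_rng() if rng is None else rng
    w_snap = np.asarray(w0, dtype=float).copy()
    m = int(inner_ratio * n)
    for _ in range(outer_iters):
        # Full gradient at the snapshot (the expensive, once-per-epoch step).
        mu = sum(grad_i(w_snap, i) for i in range(n)) / n
        w = w_snap.copy()
        for _ in range(m):
            i = rng.integers(n)
            # E_i[grad_i(w_snap, i)] = mu, so v is an unbiased estimate of
            # the full gradient at w, with variance shrinking near the snapshot.
            v = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - step_size * v
        w_snap = w
    return w_snap
```

For instance, with least-squares components one could pass `grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i]` for a data matrix `X` and targets `y` (again, illustrative names, not from the paper).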
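The 'lazy update for ℓ1 regularization' mentioned in the setup row exploits sparsity: for a coordinate j where the sampled data point has a zero entry, the variance-reduced gradient reduces to the constant snapshot term μ̃_j, so j's prox steps can be deferred and replayed only when j is next touched. Below is a hedged sketch of that bookkeeping (class and method names are ours); the paper's LIBLINEAR-based implementation replays the deferred steps in closed form, whereas this sketch replays them with a short loop that exits early at a fixed point.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * |.| (scalar soft thresholding)."""
    return np.sign(x) * max(abs(x) - t, 0.0)

class LazyL1:
    """Deferred l1 prox updates for coordinates untouched by sparse gradients."""

    def __init__(self, w, mu, eta, lam):
        self.w = np.asarray(w, dtype=float).copy()
        self.mu = mu                      # snapshot full gradient (constant drift)
        self.eta, self.lam = eta, lam     # step size, l1 weight
        self.clock = 0                    # global iteration counter
        self.last = np.zeros(len(self.w), dtype=int)  # last sync time per coord

    def catch_up(self, j):
        # Replay the steps coordinate j missed: each is the constant drift
        # mu[j] followed by the l1 prox. Exit early once a fixed point is hit,
        # since every further replay would leave w[j] unchanged.
        for _ in range(self.clock - self.last[j]):
            new = soft_threshold(self.w[j] - self.eta * self.mu[j],
                                 self.eta * self.lam)
            if new == self.w[j]:
                break
            self.w[j] = new
        self.last[j] = self.clock

    def step(self, vr_grad, coords):
        # One stochastic step; `coords` are the nonzero coordinates of the
        # sampled point, vr_grad(j) the data-dependent part of the
        # variance-reduced gradient (zero off `coords`).
        for j in coords:
            self.catch_up(j)
            g = vr_grad(j) + self.mu[j]
            self.w[j] = soft_threshold(self.w[j] - self.eta * g,
                                       self.eta * self.lam)
            self.last[j] = self.clock + 1  # j is current through this step
        self.clock += 1

    def finalize(self):
        for j in range(len(self.w)):       # sync every coordinate at the end
            self.catch_up(j)
        return self.w
```

The explicit catch-up loop makes the deferral visible; replacing it with a closed-form count of "drift then threshold" steps is what gives the paper's implementation constant-time catch-up per touched coordinate.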