Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization
Authors: Chaobing Song, Yong Jiang, Yi Ma
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments on real datasets, we show the good performance of VRADA over existing methods for large-scale machine learning problems. |
| Researcher Affiliation | Academia | Tsinghua-Berkeley Shenzhen Institute, Tsinghua University (songcb16@mails.tsinghua.edu.cn, jiangy@sz.tsinghua.edu.cn); Department of EECS, University of California, Berkeley (yima@eecs.berkeley.edu) |
| Pseudocode | Yes | Algorithm 1 Variance Reduction via Accelerated Dual Averaging (VRADA) (see the loop-structure sketch after this table) |
| Open Source Code | No | No explicit statement or link providing concrete access to the source code for the methodology described in this paper was found. |
| Open Datasets | Yes | The datasets we use are a9a and covtype, downloaded from the LibSVM website. The dataset URL is https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ (a loading sketch follows this table). |
| Dataset Splits | No | No explicit information regarding training/validation/test dataset splits (e.g., percentages, sample counts, or specific cross-validation setup) was found. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications, or cloud instances) used for running the experiments were provided. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names like PyTorch 1.9 or specific solver versions) were provided in the paper. |
| Experiment Setup | Yes | The problem we study is the ℓ₂-norm regularized logistic regression problem with regularization parameter λ ∈ {0, 10⁻⁸, 10⁻⁴}. All four compared algorithms share a similar outer-inner structure, and the number of inner iterations is set to m = 2n for each (the objective is sketched after this table). |
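The pseudocode row above refers to Algorithm 1 (VRADA). Below is a minimal Python sketch of the outer-inner structure that the paper says all four compared methods share: an SVRG-style variance-reduced gradient computed against a snapshot point, combined with a weighted dual-averaging-style update. The coupling coefficient `theta`, the weights `a`, and the prox step are illustrative placeholders, not the schedule analyzed in the paper; `grad_i` and `full_grad` are assumed user-supplied callables.

```python
import numpy as np

def vrada_sketch(grad_i, full_grad, x0, n, epochs=10, m=None, L=1.0):
    """Hedged sketch of an outer-inner variance-reduction loop with a
    dual-averaging-style update. Coefficients are placeholders, NOT the
    schedule proved for VRADA in the paper."""
    if m is None:
        m = 2 * n                      # paper sets the inner-loop length to m = 2n
    rng = np.random.default_rng(0)
    x_snap = x0.copy()                 # snapshot point (tilde-x)
    z = x0.copy()                      # dual-averaging iterate
    g_sum = np.zeros_like(x0)          # running weighted sum of gradient estimates
    a_sum = 0.0                        # running sum of weights
    for s in range(epochs):
        mu = full_grad(x_snap)         # full gradient at the snapshot
        x_avg = np.zeros_like(x0)
        for k in range(m):
            a = (k + 1) / (2 * L)      # placeholder weight schedule
            theta = 0.5                # placeholder coupling coefficient
            y = theta * z + (1 - theta) * x_snap
            i = rng.integers(n)        # sample one component uniformly
            g = grad_i(i, y) - grad_i(i, x_snap) + mu   # variance-reduced gradient
            g_sum += a * g
            a_sum += a
            # dual-averaging step (unconstrained, illustrative scaling)
            z = x0 - g_sum / (1.0 + a_sum / (2 * L))
            x_avg += theta * z + (1 - theta) * x_snap
        x_snap = x_avg / m             # new snapshot = average of inner iterates
    return x_snap
```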
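For the open-datasets row, the a9a and covtype files are distributed in LibSVM's sparse text format. A minimal loading sketch, assuming scikit-learn is installed and the file has been downloaded locally as `a9a` (the local path is our assumption, not stated in the paper):

```python
from sklearn.datasets import load_svmlight_file

# Load a LibSVM-format dataset; a9a has 32,561 samples and 123 features.
X, y = load_svmlight_file("a9a")
print(X.shape, y.shape)
```

covtype loads the same way; only the file name changes.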
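To make the experiment-setup row concrete, the objective being minimized is ℓ₂-regularized logistic regression, f(w) = (1/n) Σᵢ log(1 + exp(−yᵢ xᵢᵀ w)) + (λ/2)‖w‖², with λ ∈ {0, 10⁻⁸, 10⁻⁴}. A minimal sketch, assuming labels in {−1, +1} (the helper names are ours):

```python
import numpy as np

def logistic_objective(w, X, y, lam):
    """f(w) = (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)) + (lam/2) * ||w||^2."""
    z = y * (X @ w)
    # np.logaddexp(0, -z) evaluates log(1 + exp(-z)) in a numerically stable way
    return np.mean(np.logaddexp(0.0, -z)) + 0.5 * lam * (w @ w)

def logistic_grad(w, X, y, lam):
    """Gradient of the objective above; works with dense or scipy-sparse X."""
    z = y * (X @ w)
    s = -y / (1.0 + np.exp(z))         # per-sample derivative of log(1 + exp(-z))
    return X.T @ s / len(y) + lam * w
```

Setting `lam = 0` recovers the unregularized case also reported in the paper.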