A unified variance-reduced accelerated gradient method for convex optimization
Authors: Guanghui Lan, Zhize Li, Yi Zhou
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide extensive experimental results to demonstrate the advantages of Varag over several state-of-the-art methods for solving some well-known ML models, e.g., logistic regression and Lasso. We defer the proofs of the main results to Appendix A. |
| Researcher Affiliation | Collaboration | Guanghui Lan, H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, george.lan@isye.gatech.edu; Zhize Li, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China, zz-li14@mails.tsinghua.edu.cn; Yi Zhou, IBM Almaden Research Center, San Jose, CA 95120, yi.zhou@ibm.com |
| Pseudocode | Yes | Algorithm 1: The variance-reduced accelerated gradient (Varag) method. Algorithm 2: Stochastic accelerated variance-reduced gradient descent (Stochastic Varag). A hedged Python sketch of the Varag loop appears after this table. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | For all experiments, we use public real datasets downloaded from the UCI Machine Learning Repository [10] and a uniform sampling strategy to select f_i. Diabetes (m = 1151), Breast Cancer Wisconsin (m = 683), Parkinsons Telemonitoring (m = 5875). A synthetic stand-in for this setup appears in the usage sketch after the table. |
| Dataset Splits | No | The paper does not provide specific details on dataset split percentages, counts, or explicit methodology for training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python 3.8, PyTorch 1.9, CPLEX 12.4). |
| Experiment Setup | Yes | The algorithmic parameters for SVRG++ and Katyusha^ns are set according to [2] and [1], respectively, and those for Varag are set as in Theorem 1 (Theorem 2 for the second set of experiments). The algorithmic parameters for FGM and Varag are set according to [21] and Theorem 3, respectively. |
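
Since the paper provides pseudocode for Varag but no released implementation (see the Pseudocode and Open Source Code rows above), the following is a minimal Python sketch of a Varag-style variance-reduced accelerated gradient loop. It is an assumption-laden illustration, not the authors' method: it covers only the simplified smooth, unconstrained, non-strongly-convex case (h = 0, mu = 0), and the momentum and step-size schedules are placeholders rather than the settings from Theorems 1-3. The names `varag_sketch` and `grad_i` are hypothetical.

```python
import numpy as np

def varag_sketch(grad_i, n, x0, L, epochs=10, inner=None, seed=0):
    """Hedged sketch of a variance-reduced accelerated gradient loop.

    Simplifications vs. the paper: h = 0 (no composite term), mu = 0
    (non-strongly-convex), Euclidean prox, and illustrative schedules.

    grad_i(x, i) -- gradient of the i-th component f_i at x
    n            -- number of components in the finite sum
    L            -- smoothness constant of f = (1/n) * sum_i f_i
    """
    rng = np.random.default_rng(seed)
    T = inner or n                      # inner-loop length per epoch
    x = x_bar = x_tilde = x0.copy()
    for s in range(epochs):
        # full gradient at the snapshot point: the variance-reduction anchor
        g_tilde = np.mean([grad_i(x_tilde, i) for i in range(n)], axis=0)
        alpha = 2.0 / (s + 4)           # placeholder momentum weight
        gamma = 1.0 / (3.0 * L * alpha) # placeholder step size
        x_bar_sum = np.zeros_like(x0)
        for _ in range(T):
            i = rng.integers(n)         # uniform sampling of f_i
            x_under = alpha * x + (1.0 - alpha) * x_bar
            # variance-reduced estimator: unbiased, with shrinking variance
            g = grad_i(x_under, i) - grad_i(x_tilde, i) + g_tilde
            x = x - gamma * g           # gradient step (prox step with h = 0)
            x_bar = alpha * x + (1.0 - alpha) * x_bar
            x_bar_sum += x_bar
        x_tilde = x_bar_sum / T         # next snapshot: epoch average
    return x_bar
```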
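
The experiment rows above quote logistic regression on small UCI datasets with uniform sampling of f_i, but the paper's exact preprocessing is not stated, so the usage example below runs the sketch on synthetic data standing in for, e.g., Diabetes (m = 1151). The ridge-regularized logistic objective, the weight `lam`, and the per-component smoothness bound are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 200, 10                          # synthetic stand-in for a UCI dataset
A = rng.standard_normal((m, d))
b = np.sign(A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(m))
lam = 1e-3                              # assumed ridge weight

def grad_i(x, i):
    # gradient of f_i(x) = log(1 + exp(-b_i * a_i^T x)) + (lam/2) * ||x||^2
    z = -b[i] * (A[i] @ x)
    return -b[i] * A[i] / (1.0 + np.exp(-z)) + lam * x

# per-component smoothness: ||a_i||^2 / 4 for the logistic loss, plus lam
L = np.max(np.sum(A**2, axis=1)) / 4.0 + lam
x_hat = varag_sketch(grad_i, m, np.zeros(d), L, epochs=20)
```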