A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates
Authors: Kaiwen Zhou, Fanhua Shang, James Cheng
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments for various machine learning problems such as logistic regression are given to illustrate the practical improvement in both serial and asynchronous settings. |
| Researcher Affiliation | Academia | ¹Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong; ²School of Artificial Intelligence, Xidian University, China. Correspondence to: Fanhua Shang <fhshang@xidian.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 MiG |
| Open Source Code | No | The paper states that all algorithms were implemented in C++ and executed through a MATLAB interface, but it does not provide any specific link or statement about open-sourcing the code for its methodology. |
| Open Datasets | Yes | We measure the performance on the two sparse datasets listed in Table 4. Table 4 (summary of the two sparse datasets): RCV1 with 697,641 samples, 47,236 features, density 1.5×10^-3; KDD2010 with 19,264,097 samples, 1,163,024 features, density 10^-6. |
| Dataset Splits | No | The paper does not explicitly provide details about training, validation, or test dataset splits (e.g., exact percentages or sample counts), nor does it reference predefined splits with specific citations that include author and year. |
| Hardware Specification | No | The paper states that 'All the algorithms were implemented in C++ and executed through MATLAB interface', but it does not provide any specific hardware details such as GPU models, CPU specifications, or memory amounts used for the experiments. |
| Software Dependencies | No | The paper states that 'All the algorithms were implemented in C++ and executed through MATLAB interface', but it does not provide specific version numbers for these software components or any other libraries or solvers used. |
| Experiment Setup | Yes | For the case of m < 4n (see the first row in Figure 1), we set the parameters for MiG and Katyusha with their theoretical suggestions (e.g., θ = τ₁ = √(m/(3κ))). For fair comparison, we set the learning rate η = 1/(4L) for SVRG and Acc-Prox-SVRG, which is theoretically reasonable. ... For the case of m > 4n (see the second row in Figure 1), we tuned all the parameters in Table 3. |
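
For context on the setup quoted above, here is a minimal NumPy sketch of the MiG update (Algorithm 1) on ℓ2-regularized logistic regression, using the theoretical parameter choice θ = √(m/(3κ)) mentioned in the table. Since the authors did not release code, everything below is an illustrative assumption rather than their implementation: the step-size pairing η = 1/(3θL), the epoch length m = 2n, the clamp θ ≤ 1/2 for the well-conditioned regime, and the synthetic data are all choices made for this sketch.

```python
# Minimal sketch of a MiG-style run (Algorithm 1) on l2-regularized logistic
# regression. Hypothetical re-implementation: the paper's experiments used C++
# with a MATLAB interface, and no reference code is available.
import numpy as np

def grad(w, A, b, reg, idx=None):
    """Gradient of (1/k) sum_i log(1 + exp(-b_i * a_i^T w)) + (reg/2)||w||^2,
    averaged over the rows in idx (all rows if idx is None)."""
    if idx is not None:
        A, b = A[idx], b[idx]
    s = 1.0 / (1.0 + np.exp(b * (A @ w)))            # sigmoid(-b * a^T w)
    return A.T @ (-b * s) / len(b) + reg * w

def mig(A, b, reg, epochs=20, m=None, rng=None):
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    m = m or 2 * n                                   # epoch length (assumption: 2n)
    L = 0.25 * np.max(np.sum(A * A, axis=1)) + reg   # smoothness bound for each f_i
    kappa = L / reg                                  # condition number
    theta = min(np.sqrt(m / (3.0 * kappa)), 0.5)     # theta = sqrt(m/(3*kappa)),
                                                     # clamped at 1/2 (assumption)
    eta = 1.0 / (3.0 * theta * L)                    # paired step size (assumption)
    x_tilde = np.zeros(d)                            # snapshot point
    x = x_tilde.copy()                               # single auxiliary iterate
    for _ in range(epochs):
        mu = grad(x_tilde, A, b, reg)                # full gradient at the snapshot
        for _ in range(m):
            i = [rng.integers(n)]
            y = theta * x + (1.0 - theta) * x_tilde  # coupled query point
            v = grad(y, A, b, reg, i) - grad(x_tilde, A, b, reg, i) + mu
            x -= eta * v                             # variance-reduced gradient step
        x_tilde = theta * x + (1.0 - theta) * x_tilde  # snapshot update
    return x_tilde

# Tiny synthetic run (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = np.sign(A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(200))
w = mig(A, b, reg=1e-2)
print("train loss:", np.mean(np.log1p(np.exp(-b * (A @ w)))))
```

Note the design point the paper emphasizes: the coupling y = θx + (1 - θ)x̃ lets MiG maintain only one auxiliary vector besides the snapshot, which is what keeps its per-iteration cost below that of two-variable accelerated methods such as Katyusha.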