Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization
Authors: Zebang Shen, Hui Qian, Tongzhou Mu, Chao Zhang
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Empirical studies on huge-scale datasets are conducted to illustrate the efficiency of our method in practice." "In this section, we conduct several experiments to show the efficiency of ADSG on huge-scale real problems." |
| Researcher Affiliation | Academia | Zebang Shen, Hui Qian, Tongzhou Mu, and Chao Zhang; College of Computer Science and Technology, Zhejiang University, China. {shenzebang, qianhui, mutongzhou, zczju}@zju.edu.cn |
| Pseudocode | Yes | Algorithm 1 (ADSG I) and Algorithm 2 (ADSG II) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | Four large-scale datasets from LIBSVM [Chang and Lin, 2011] are used: kdd2010raw, avazu-app, news20.binary, and url-combined. Their statistics are given in Table 2. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into training/validation/test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | All methods use the same mini-batch size b. The default inner-loop counts from the original papers are used for SVRG, MRBCD, and Katyusha. Step sizes are tuned to give the best performance. |
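For context on what "doubly stochastic" means in the paper's title: each iteration samples both a mini-batch of examples and a block of coordinates, updating only that block. The sketch below illustrates this base scheme on l2-regularized logistic ERM; it is a hedged illustration only, not the paper's ADSG algorithm (which adds acceleration and variance reduction on top), and all function names, constants, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def doubly_stochastic_gd(X, y, step=0.2, batch=4, block=2, iters=5000, seed=0):
    """Plain doubly stochastic gradient descent for l2-logistic ERM.

    Each step draws a random mini-batch of examples AND a random block of
    coordinates, then updates only those coordinates. This is the generic
    doubly stochastic scheme, without ADSG's acceleration or variance
    reduction; names and defaults are illustrative.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    lam = 1e-4  # illustrative l2 regularization strength
    for _ in range(iters):
        i = rng.choice(n, size=batch, replace=False)  # sample examples
        j = rng.choice(d, size=block, replace=False)  # sample coordinates
        margin = X[i] @ w
        # gradient of mean logistic loss, restricted to the sampled block
        g = X[i][:, j].T @ (-y[i] / (1.0 + np.exp(y[i] * margin))) / batch
        w[j] -= step * (g + lam * w[j])  # update only the sampled block
    return w
```

Because each step touches only `batch` rows and `block` columns, the per-iteration cost is O(batch * d) for the margins plus O(batch * block) for the block gradient, which is the cost structure that makes doubly stochastic methods attractive at huge scale.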