New Insight into Hybrid Stochastic Gradient Descent: Beyond With-Replacement Sampling and Convexity
Authors: Pan Zhou, Xiaotong Yuan, Jiashi Feng
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive numerical results confirm our theoretical affirmation and demonstrate the favorable efficiency of WoRS-based HSGD. |
| Researcher Affiliation | Academia | Learning & Vision Lab, National University of Singapore, Singapore; B-DAT Lab, Nanjing University of Information Science & Technology, Nanjing, China. Contact: pzhou@u.nus.edu, xtyuan@nuist.edu.cn, elefjia@nus.edu.sg |
| Pseudocode | Yes (see the sketch after this table) | Algorithm 1: Hybrid SGD under WoRS |
| Open Source Code | No | The paper does not provide any statement about releasing source code or links to a code repository. |
| Open Datasets | Yes | All the datasets are public datasets from LibSVM, which can be downloaded from https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ |
| Dataset Splits | No | The paper does not explicitly specify exact percentages or sample counts for training, validation, and test splits, nor does it refer to predefined splits with citations for reproducibility. |
| Hardware Specification | No | The paper does not specify any hardware used for running the experiments, such as CPU or GPU models, or cloud computing environments with specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, programming language versions) used for the experiments. |
| Experiment Setup | No | While the paper mentions 'Hyper-parameters of all the algorithms are tuned to best' and discusses learning rates and mini-batch size strategies in theory, it does not explicitly provide the specific hyperparameter values or detailed training configurations used in the experiments. |
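The pseudocode row above refers to "Algorithm 1: Hybrid SGD under WoRS", but this report does not reproduce its steps. The sketch below only illustrates the two ingredients named in the paper's title: mini-batch gradients drawn by without-replacement sampling (WoRS, i.e. shuffling and walking through the data), combined with a growing mini-batch size as one common way to hybridize SGD with full gradient descent. The function name `hsgd_wors_sketch`, the least-squares objective, and all hyperparameter values are illustrative assumptions, not the authors' Algorithm 1 or its tuned settings.

```python
import numpy as np


def hsgd_wors_sketch(X, y, lr=0.1, epochs=5, b0=4, growth=2, seed=0):
    """Illustrative hybrid SGD with without-replacement sampling (WoRS)
    on a least-squares objective. The geometrically growing mini-batch
    size is one common way to interpolate between SGD and full gradient
    descent; it is NOT the paper's Algorithm 1 verbatim."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    batch = b0
    for _ in range(epochs):
        perm = rng.permutation(n)  # WoRS: shuffle once, then sweep the data
        for start in range(0, n, batch):
            idx = perm[start:start + batch]
            residual = X[idx] @ w - y[idx]
            grad = X[idx].T @ residual / len(idx)
            w -= lr * grad
        batch = min(n, batch * growth)  # hybrid: batch grows toward the full gradient
    return w


# Tiny usage example on synthetic data (hypothetical values).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    w_hat = hsgd_wors_sketch(X, y)
    print("estimation error:", np.linalg.norm(w_hat - w_true))
```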