Conditional Accelerated Lazy Stochastic Gradient Descent

Authors: Guanghui Lan, Sebastian Pokutta, Yi Zhou, Daniel Zink

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we demonstrate practical speed ups of CALSGD through preliminary numerical experiments for the video co-localization problem, the structured regression problem and quadratic optimization over the standard spectrahedron; an extensive study is beyond the scope of this paper and left for future work. In all cases we report substantial improvements in performance. (The spectrahedron problem named here is written out below the table.)
Researcher Affiliation | Academia | ISyE, Georgia Institute of Technology, Atlanta, GA.
Pseudocode | Yes | Algorithm 1: Conditional Accelerated Lazy Stochastic Gradient Descent (CALSGD). (A generic conditional-gradient step is sketched below the table.)
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology described is openly available.
Open Datasets | No | The paper mentions problem instances like 'video co-localization problem' and 'structured regression problem' but does not provide specific access information (URL, DOI, or formal citation with authors/year) for a publicly available dataset.
Dataset Splits | No | The paper does not provide specific training/test/validation dataset splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | No | The paper mentions 'wall clock time' in its experimental results, but it does not explicitly describe any specific hardware (GPU models, CPU models, memory, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper states 'we used Gurobi as a solver' but does not provide a specific version number for this software dependency as used in their experiments. (A note on recording the solver version follows the table.)
Experiment Setup | Yes | For comparability we use a batch size of 128 for all algorithms to compute each gradient and the full matrix A for the actual objective function values. (This mini-batch setup is illustrated below the table.)
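
For reference, the "quadratic optimization over the standard spectrahedron" named in the Research Type row has the following generic form. The linear map \mathcal{A} and the matrix C are placeholders, since the paper's instance data are not quoted on this page; only the feasible set (positive semidefinite matrices with unit trace) is taken as given.

% Generic quadratic program over the standard spectrahedron (requires amsmath, amssymb).
% \mathcal{A} is a linear map on symmetric matrices and C a symmetric matrix; both are placeholders.
\[
  \min_{X \in \mathbb{S}^{n}} \ \tfrac{1}{2}\,\langle X, \mathcal{A}X \rangle + \langle C, X \rangle
  \qquad \text{s.t.} \qquad X \succeq 0, \quad \operatorname{Tr}(X) = 1 .
\]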
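
The Pseudocode row points to Algorithm 1 (CALSGD). The sketch below is not that algorithm: it shows only a plain conditional-gradient (Frank-Wolfe) step over the unit simplex, the kind of update that CALSGD accelerates and equips with a lazy linear-optimization oracle. All names (x, grad, t) are illustrative.

import numpy as np

def frank_wolfe_step(x, grad, t):
    """One conditional-gradient (Frank-Wolfe) step over the unit simplex.
    Illustrative only: Algorithm 1 in the paper (CALSGD) wraps updates of this
    kind in Nesterov-style acceleration and a lazy linear-optimization oracle."""
    v = np.zeros_like(x)
    v[np.argmin(grad)] = 1.0        # exact LMO over the simplex returns a vertex
    gamma = 2.0 / (t + 2)           # standard Frank-Wolfe step-size schedule
    return (1.0 - gamma) * x + gamma * v

Starting from the barycenter x = np.full(n, 1.0 / n) and feeding in any gradient estimate, repeated calls keep the iterate inside the simplex because each update is a convex combination of feasible points.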
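
The Experiment Setup row quotes a batch size of 128 for gradient computation and the full matrix A for objective values. A minimal sketch of that split, assuming a least-squares objective as a stand-in for the paper's structured regression instances (A, b, and the sampling scheme are placeholders):

import numpy as np

rng = np.random.default_rng(0)

def minibatch_gradient(A, b, x, batch_size=128):
    """Stochastic gradient of f(x) = (1/2m)||Ax - b||^2 from a random
    mini-batch of 128 rows, mirroring the batch size quoted above."""
    idx = rng.choice(A.shape[0], size=batch_size, replace=False)
    return A[idx].T @ (A[idx] @ x - b[idx]) / batch_size

def full_objective(A, b, x):
    """Objective evaluated with the full matrix A, as the quoted setup does."""
    r = A @ x - b
    return 0.5 * (r @ r) / A.shape[0]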
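
The Software Dependencies row notes that no Gurobi version is reported. Anyone re-running the experiments could log the installed version alongside their results, e.g. via the gurobipy constants below (assuming the Python bindings are used; the paper does not say which interface was used).

import gurobipy as gp

# The paper only states that Gurobi was used, without a version number;
# recording the installed version helps make re-runs comparable.
print("Gurobi", gp.GRB.VERSION_MAJOR, gp.GRB.VERSION_MINOR, gp.GRB.VERSION_TECHNICAL)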