Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm

Authors: Deanna Needell, Rachel Ward, Nati Srebro

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We prove improved convergence results for generic SGD, as well as for SGD with gradient estimates chosen based on a weighted sampling distribution, highlighting the role of importance sampling in SGD. We first show that without perturbing the sampling distribution, we can obtain a linear dependence on the uniform conditioning (sup Li)/µ, but it is not possible to obtain a linear dependence on the average conditioning E[Li]/µ; this is a quadratic improvement over [1] in regimes where the components have similar Lipschitz constants (Theorem 2.1 in Section 2). We then show that with weighted sampling we can obtain a linear dependence on the average conditioning E[Li]/µ, dominating the quadratic dependence of [1] (Corollary 3.1 in Section 3). In Section 4, we show that, also for smooth but not strongly convex objectives, importance sampling can improve a dependence on a uniform bound on the smoothness, sup Li, to a dependence on the average smoothness E[Li]; such an improvement is not possible without importance sampling. For non-smooth objectives, we show that importance sampling can eliminate a dependence on the variance in the Lipschitz constants of the components. Finally, in Section 5, we turn to the Kaczmarz algorithm and show that we can improve known guarantees in this context as well. ... Figures 1 and 2 present experimental results on synthetic problems. (A minimal illustrative sketch of the weighted-sampling update and the randomized Kaczmarz iteration appears after this table.)
Researcher Affiliation | Academia | Deanna Needell, Department of Mathematical Sciences, Claremont McKenna College, Claremont, CA 91711, dneedell@cmc.edu; Nathan Srebro, Toyota Technological Institute at Chicago and Dept. of Computer Science, Technion, nati@ttic.edu; Rachel Ward, Department of Mathematics, Univ. of Texas, Austin, rward@math.utexas.edu
Pseudocode | No | The paper contains mathematical equations describing the SGD updates (e.g., (2.1), (3.3)), but it does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described.
Open Datasets | No | The paper describes experiments on "synthetic overdetermined least squares problems" and "synthetic underdetermined least squares problems" with generated data characteristics, but it does not provide concrete access information (link, DOI, repository, or formal citation) for a publicly available or open dataset.
Dataset Splits | No | The paper describes experiments on synthetic data but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | For the experimental figures, the paper states that "the stepsize was chosen as in (3.10)". While this refers to a formula, it does not provide concrete numerical values for hyperparameters or other training configurations (e.g., batch size, number of epochs, optimizer settings) for the synthetic problems used in the experiments.
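
Since the paper releases neither code nor pseudocode, the following is a minimal, hypothetical Python sketch (not the authors' implementation) of the weighted-sampling idea the paper analyzes: for a least-squares objective, row i is drawn with probability proportional to its Lipschitz constant Li = ||a_i||^2 and the stochastic gradient is reweighted by 1/(n·p_i) to keep it unbiased. With the particular stepsize used below, this update coincides with the randomized Kaczmarz projection step discussed in Section 5. Function names, problem sizes, and iteration counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

def weighted_sgd_least_squares(A, b, n_iters=5000, step=None, rng=None):
    """Sketch of SGD with importance sampling for F(x) = (1/2n) * sum_i (a_i x - b_i)^2.

    Row i is sampled with probability p_i proportional to L_i = ||a_i||^2
    (its Lipschitz constant), and the stochastic gradient is reweighted by
    1 / (n * p_i) so the gradient estimate stays unbiased.  Hypothetical
    illustration of the weighted-sampling scheme, not the authors' code.
    """
    rng = np.random.default_rng(rng)
    n, d = A.shape
    L = np.sum(A**2, axis=1)               # per-row Lipschitz constants L_i
    p = L / L.sum()                         # importance-sampling distribution
    if step is None:
        step = 1.0 / L.mean()               # heuristic stepsize ~ 1 / E[L_i]
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=p)
        residual = A[i] @ x - b[i]
        grad = residual * A[i]              # gradient of (1/2)(a_i x - b_i)^2
        x -= step * grad / (n * p[i])       # reweight to keep the estimate unbiased
    return x

def randomized_kaczmarz(A, b, n_iters=5000, rng=None):
    """Randomized Kaczmarz: project onto the hyperplane a_i x = b_i, with row i
    drawn with probability proportional to ||a_i||^2.  With step = 1/E[L_i],
    the weighted SGD update above reduces to exactly this projection."""
    rng = np.random.default_rng(rng)
    n, d = A.shape
    row_norms_sq = np.sum(A**2, axis=1)
    p = row_norms_sq / row_norms_sq.sum()
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.choice(n, p=p)
        x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
    return x

if __name__ == "__main__":
    # Synthetic overdetermined consistent system (illustrative sizes only).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 50)) * rng.uniform(0.1, 10.0, size=(500, 1))
    x_true = rng.standard_normal(50)
    b = A @ x_true
    for solver in (weighted_sgd_least_squares, randomized_kaczmarz):
        x_hat = solver(A, b, n_iters=20000, rng=1)
        print(solver.__name__, np.linalg.norm(x_hat - x_true))
```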