A Guide Through the Zoo of Biased SGD
Authors: Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtárik
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we demonstrate the effectiveness of our framework through experimental results that validate our theoretical findings. ... The primary goal of these numerical experiments is to demonstrate the alignment of our theoretical findings with the observed experimental results. |
| Researcher Affiliation | Academia | Yury Demidovich, AI Initiative, KAUST, yury.demidovich@kaust.edu.sa; Grigory Malinovsky, AI Initiative, KAUST, grigorii.malinovskii@kaust.edu.sa; Igor Sokolov, AI Initiative, KAUST, igor.sokolov.1@kaust.edu.sa; Peter Richtárik, AI Initiative, KAUST, peter.richtarik@kaust.edu.sa |
| Pseudocode | Yes | Algorithm 1: Biased Stochastic Gradient Descent (Biased SGD) (a hedged sketch of this template appears after the table) |
| Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the methodology it describes. |
| Open Datasets | Yes | The experiments utilized publicly available LibSVM datasets [Chang and Lin, 2011], specifically the splice, a9a, and w8a datasets. ... We use datasets from the open LibSVM library [Chang and Lin, 2011]. (a data-loading sketch appears after the table) |
| Dataset Splits | No | The paper mentions using 'training data samples' from publicly available datasets but does not specify the exact percentages or counts for training, validation, or test splits. There is no explicit description of dataset partitioning for reproduction. |
| Hardware Specification | Yes | executed on a machine equipped with 48 cores of Intel(R) Xeon(R) Gold 6246 CPU @ 3.30GHz. |
| Software Dependencies | Yes | These algorithms were developed using Python 3.8 |
| Experiment Setup | Yes | In all experiments, we set the regularization parameter λ to a fixed value of λ = 1. ... The algorithms are terminated after completing 5000 iterations. ... Specifically, for Biased SGD-ind, the stepsize is determined according to Corollary 4 and Claim 2 with γ = min{1/(LAK), b/(LB), c/(LC)}, where c = 0, A = max_i L_i, B = 0, C = 2A + s², b = min_i p_i, and s = 0. (a setup sketch appears after the table) |
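
The pseudocode row above refers to Algorithm 1 (Biased SGD). Below is a minimal Python sketch of that generic template, not the authors' released code: the biased gradient oracle `grad_oracle`, the stepsize `gamma`, and the iteration count are placeholders supplied by the caller.

```python
import numpy as np

def biased_sgd(grad_oracle, x0, gamma, num_iters, rng=None):
    """Generic Biased SGD loop (a sketch of the Algorithm 1 template).

    grad_oracle(x, rng) returns a stochastic -- possibly biased -- estimate
    of the gradient of the objective at x; gamma is a fixed stepsize.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_iters):
        g = grad_oracle(x, rng)   # biased stochastic gradient g(x^k)
        x = x - gamma * g         # x^{k+1} = x^k - gamma * g(x^k)
    return x
```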
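
The datasets row names the splice, a9a, and w8a LibSVM datasets [Chang and Lin, 2011]. A loading sketch using scikit-learn's `load_svmlight_file` is shown below; the local file paths are hypothetical, since the files must first be downloaded from the LibSVM repository.

```python
from sklearn.datasets import load_svmlight_file

# Hypothetical local paths; download the files from the LibSVM repository first.
X_splice, y_splice = load_svmlight_file("data/splice")
X_a9a, y_a9a = load_svmlight_file("data/a9a")
X_w8a, y_w8a = load_svmlight_file("data/w8a")

print(X_splice.shape, X_a9a.shape, X_w8a.shape)  # sparse feature matrices, +-1 labels
```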
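
The setup row quotes λ = 1, a 5000-iteration budget, and a stepsize chosen via Corollary 4 and Claim 2. The sketch below wires the loaded data and the `biased_sgd` template from the previous snippets into such a run. The ℓ2 form of the regularizer, the minibatch oracle, and the numeric stepsize are assumptions made for illustration; the report quotes only the stepsize rule, not the estimator's exact form or the resulting value.

```python
import numpy as np

lam = 1.0          # regularization parameter quoted in the setup (lambda = 1)
num_iters = 5000   # the runs are reported to stop after 5000 iterations
gamma = 1e-2       # placeholder stepsize; the report quotes a rule, not a value

def logreg_grad(x, A, y, lam):
    """Gradient of (1/n) sum_i log(1 + exp(-y_i a_i^T x)) + (lam/2) ||x||^2.

    The l2 form of the regularizer is an assumption made for this sketch.
    """
    margins = y * (A @ x)
    coeff = -y / (1.0 + np.exp(margins))
    return (A.T @ coeff) / A.shape[0] + lam * x

def minibatch_oracle(A, y, lam, batch=32):
    """Stand-in stochastic oracle; NOT the paper's Biased SGD-ind estimator."""
    def oracle(x, rng):
        idx = rng.integers(0, A.shape[0], size=batch)
        return logreg_grad(x, A[idx], y[idx], lam)
    return oracle

# Assumes X_a9a, y_a9a and biased_sgd from the previous sketches are in scope.
A_dense = X_a9a.toarray()
x_final = biased_sgd(minibatch_oracle(A_dense, y_a9a, lam),
                     np.zeros(A_dense.shape[1]), gamma, num_iters)
```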