Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning
Authors: Grigory Malinovsky, Kai Yi, Peter Richtárik
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We characterize this threshold theoretically, and confirm our theoretical predictions with empirical results. We conduct several experiments on the w8a dataset from the LibSVM library [Chang and Lin, 2011]. |
| Researcher Affiliation | Academia | Grigory Malinovsky KAUST grigory.malinovsky@kaust.edu.sa Kai Yi KAUST kai.yi@kaust.edu.sa Peter Richtárik KAUST peter.richtarik@kaust.edu.sa |
| Pseudocode | Yes | Algorithm 1 ProxSkip-VR |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the proposed method or a link to a code repository. |
| Open Datasets | Yes | We conduct several experiments on the w8a dataset from the LibSVM library [Chang and Lin, 2011]. |
| Dataset Splits | No | The paper mentions using the w8a dataset for experiments but does not explicitly provide details about training, validation, or test dataset splits (e.g., percentages, sample counts, or specific methods for splitting). |
| Hardware Specification | No | The paper does not specify any particular hardware (GPU models, CPU types, memory, etc.) used for running the experiments. |
| Software Dependencies | No | The paper mentions the LibSVM library but does not provide specific version numbers for it or any other software dependencies. |
| Experiment Setup | Yes | We set the regularization parameter λ = 5 · 10⁻⁴ L by default, where L is the smoothness constant of f. We choose nᵢ = m = n/M for all i, for three choices of mini-batch size (τ = 16, 32, 64). |
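Since the paper releases no code, the skipped-prox mechanism behind Algorithm 1 (ProxSkip-VR) can be sketched from its description: the proximal step is applied only with probability p, a control variate h corrects for the skipped prox, and any (variance-reduced) gradient estimator can be plugged in. The toy least-squares problem, ℓ1 regularizer, step size, and p below are illustrative assumptions, not the paper's w8a setup; the full gradient stands in for the VR estimator for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (assumed for illustration): f(x) = 0.5 ||Ax - b||^2, psi = lam * ||x||_1.
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
lam = 0.1

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_psi(x, step):
    # Soft-thresholding: prox of step * lam * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

def proxskip(x0, gamma, p, iters):
    """ProxSkip-style loop: the prox is evaluated only with probability p;
    h is a control variate updated from the prox correction.
    Replace grad_f with a variance-reduced estimator for ProxSkip-VR."""
    x, h = x0.copy(), np.zeros_like(x0)
    for _ in range(iters):
        x_hat = x - gamma * (grad_f(x) - h)
        if rng.random() < p:            # prox applied only with probability p
            x_new = prox_psi(x_hat - (gamma / p) * h, gamma / p)
        else:                           # prox skipped this round
            x_new = x_hat
        h = h + (p / gamma) * (x_new - x_hat)
        x = x_new
    return x

L = np.linalg.norm(A.T @ A, 2)          # smoothness constant of f
x = proxskip(np.zeros(10), gamma=1.0 / L, p=0.2, iters=3000)
obj = 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
```

The key design point is that h converges to the gradient of f at the solution, so skipping the expensive prox (in federated learning, the communication round) most of the time does not bias the method.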