Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques

Authors: Bokun Wang, Mher Safaryan, Peter Richtárik

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we provide extensive numerical evidence with convex optimization problems that our smoothness-aware quantization strategies outperform existing quantization schemes as well as the aforementioned smoothness-aware sparsification strategies with respect to three evaluation metrics: the number of iterations, the total amount of bits communicated, and wall-clock time.
Researcher Affiliation | Academia | Bokun Wang (Texas A&M University, United States, bokunw.wang@gmail.com); Mher Safaryan (KAUST, Saudi Arabia, mher.safaryan.1@kaust.edu.sa); Peter Richtárik (KAUST, Saudi Arabia, peter.richtarik@kaust.edu.sa)
Pseudocode | Yes | Algorithm 1 (DCGD+ with arbitrary unbiased compression) and Algorithm 2 (DIANA+ with arbitrary unbiased compression); a minimal illustrative sketch follows the table.
Open Source Code | No | The paper does not provide an explicit statement or link for the source code of the described methodology.
Open Datasets | Yes | We conduct a range of experiments with several datasets from the LibSVM repository [Chang and Lin, 2011]
Dataset Splits | No | The paper mentions the datasets used but does not explicitly provide details about training, validation, or test splits.
Hardware Specification | Yes | The experiments are performed on a workstation with Intel(R) Xeon(R) Gold 6246 CPU @ 3.30GHz cores.
Software Dependencies | No | The paper mentions the MPI4PY library [Dalcín et al., 2005] but does not provide specific version numbers for software dependencies.
Experiment Setup | No | The paper provides details on the problem formulation and data allocation, and mentions running experiments with 5 random seeds, but does not explicitly state specific training hyperparameters such as learning rates, batch sizes, or regularization coefficients.
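For context on the pseudocode entry above, here is a minimal, hypothetical sketch of the kind of method being assessed: a simulated DCGD-style distributed gradient descent loop in which each node applies an unbiased random-dithering quantizer to its local gradient before the server averages and takes a step. All function names, the synthetic data, and the hyperparameters are illustrative assumptions, not the authors' code; the paper's DCGD+ and DIANA+ variants additionally exploit smoothness information (and DIANA+ compresses gradient differences), which is omitted here.

```python
"""Illustrative sketch (not the paper's implementation): DCGD-style
compressed gradient descent with an unbiased random-dithering quantizer,
simulated in a single process on synthetic logistic-regression data."""
import numpy as np

rng = np.random.default_rng(0)


def random_dithering(x, s=4):
    """Unbiased s-level random dithering quantizer, E[Q(x)] = x."""
    norm = np.linalg.norm(x)
    if norm == 0.0:
        return x
    # Randomized rounding of s*|x_i|/||x|| to an integer level keeps the estimator unbiased.
    levels = np.floor(s * np.abs(x) / norm + rng.random(x.shape))
    return norm * np.sign(x) * levels / s


def local_grad(w, A, b, lam):
    """Gradient of L2-regularized logistic loss on one node's data (A, b)."""
    z = np.clip(A @ w, -30.0, 30.0)  # clip margins to avoid overflow in exp
    return A.T @ (-b / (1.0 + np.exp(b * z))) / len(b) + lam * w


# Synthetic data split across n nodes (hypothetical sizes).
n, m, d, lam = 10, 50, 20, 0.1
A = [rng.standard_normal((m, d)) for _ in range(n)]
b = [np.sign(rng.standard_normal(m)) for _ in range(n)]

w, step = np.zeros(d), 0.05
for it in range(200):
    # Each node quantizes its local gradient; the "server" averages and steps.
    g = np.mean(
        [random_dithering(local_grad(w, A[i], b[i], lam)) for i in range(n)],
        axis=0,
    )
    w -= step * g
```

Because the quantizer is unbiased, the averaged compressed gradient is an unbiased estimator of the full gradient; this is the property that "arbitrary unbiased compression" in Algorithms 1 and 2 refers to. In the paper's actual experiments, the nodes communicate over MPI4PY rather than being simulated in one process.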