A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
Authors: Samuel Horváth, Peter Richtárik
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform several numerical experiments which validate our theoretical findings. Finally, we provide an experimental evaluation on an array of classification tasks with the CIFAR10 dataset, corroborating our theoretical findings. |
| Researcher Affiliation | Academia | Samuel Horváth and Peter Richtárik KAUST Thuwal, Saudi Arabia {samuel.horvath, peter.richtarik}@kaust.edu.sa |
| Pseudocode | Yes | Algorithm 1 DCSGD; Algorithm 2 DCSGD with Error Feedback (a minimal sketch of both update rules appears after the table). |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | We do an evaluation on the CIFAR10 dataset. |
| Dataset Splits | Yes | Validation accuracy is computed on 10% of the training data, selected at random (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | Our experimental results are based on a Python implementation of all the methods running in PyTorch. (No version numbers provided for Python or PyTorch.) |
| Experiment Setup | Yes | We used a local batch size of 32. For every experiment, we randomly distributed the training dataset among 8 workers; each worker computes its local gradient based on its own dataset. We consider VGG11 (Simonyan & Zisserman, 2015) and ResNet18 (He et al., 2016) models and step-sizes 0.1, 0.05 and 0.01 (data-sharding and model-grid sketches follow the table). |
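The "Pseudocode" row names two algorithms. As a reading aid, here is a minimal NumPy sketch of one communication round of each, assuming a Top-K sparsifier as the compressor and a single-process simulation of the workers; the function names and the toy usage are illustrative assumptions, not the authors' implementation (no source code is released).

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest.
    One common contractive compressor; the paper treats compressors generally."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def dcsgd_step(x, grads, k, lr):
    """One round of DCSGD (Algorithm 1): each worker compresses its local
    stochastic gradient; the server averages the compressed messages."""
    msgs = [top_k(g, k) for g in grads]        # uplink communication
    return x - lr * np.mean(msgs, axis=0)      # server-side model update

def dcsgd_ef_step(x, grads, errors, k, lr):
    """One round of DCSGD with Error Feedback (Algorithm 2): each worker adds
    its local error memory before compressing and stores the residual."""
    update = np.zeros_like(x)
    for i, g in enumerate(grads):
        corrected = errors[i] + lr * g         # re-inject past compression error
        msg = top_k(corrected, k)              # compress the corrected step
        errors[i] = corrected - msg            # remember what was dropped
        update += msg
    return x - update / len(grads), errors

# Toy usage on a d-dimensional problem with n simulated workers.
rng = np.random.default_rng(0)
d, n, k, lr = 100, 8, 10, 0.1
x = np.zeros(d)
errors = [np.zeros(d) for _ in range(n)]
grads = [rng.standard_normal(d) for _ in range(n)]
x = dcsgd_step(x, grads, k, lr)
x, errors = dcsgd_ef_step(x, grads, errors, k, lr)
```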
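The 10% validation split and the random sharding of CIFAR10 across 8 workers can be expressed with standard PyTorch utilities. This is a sketch under assumptions (the seed, transform, and loader options are illustrative), not the authors' pipeline:

```python
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

NUM_WORKERS_SIM = 8   # paper: training data distributed among 8 workers
LOCAL_BATCH = 32      # paper: local batch size of 32
VAL_FRACTION = 0.10   # paper: 10% of training data held out for validation

transform = transforms.ToTensor()  # illustrative; augmentation unspecified
full_train = datasets.CIFAR10("./data", train=True, download=True,
                              transform=transform)

# Hold out 10% of the training set for validation.
n_val = int(VAL_FRACTION * len(full_train))
generator = torch.Generator().manual_seed(0)  # assumed seed
train_set, val_set = random_split(
    full_train, [len(full_train) - n_val, n_val], generator=generator)

# Randomly shard the remaining training data across the simulated workers.
shard_sizes = [len(train_set) // NUM_WORKERS_SIM] * NUM_WORKERS_SIM
shard_sizes[-1] += len(train_set) - sum(shard_sizes)
shards = random_split(train_set, shard_sizes, generator=generator)

worker_loaders = [DataLoader(s, batch_size=LOCAL_BATCH, shuffle=True)
                  for s in shards]
val_loader = DataLoader(val_set, batch_size=256)
```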
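Finally, the model and step-size grid from the "Experiment Setup" row. Using the torchvision reference VGG11/ResNet18 with 10 output classes is an assumption (CIFAR10 work often adapts these architectures for 32x32 inputs), and so is plain SGD, since the paper's row does not state whether momentum is used.

```python
import torch
from torchvision.models import vgg11, resnet18

STEP_SIZES = [0.1, 0.05, 0.01]  # paper's step-size grid

def make_model(name):
    # 10 output classes for CIFAR10; the torchvision variants are an
    # assumption, the paper does not specify its exact implementations.
    if name == "vgg11":
        return vgg11(num_classes=10)
    return resnet18(num_classes=10)

configs = [(name, lr) for name in ("vgg11", "resnet18") for lr in STEP_SIZES]
for name, lr in configs:
    model = make_model(name)
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # momentum unspecified
```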