New Bounds For Distributed Mean Estimation and Variance Reduction
Authors: Peter Davies, Vijaykrishna Gurunathan, Niusha Moshrefi, Saleh Ashkboos, Dan Alistarh
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show experimentally that our method yields practical improvements for common applications, relative to prior approaches. |
| Researcher Affiliation | Collaboration | Peter Davies, IST Austria, peter.davies@ist.ac.at; Vijaykrishna Gurunathan, IIT Bombay, krishnavijay1999@gmail.com; Niusha Moshrefi, IST Austria, niusha.moshrefi@ist.ac.at; Saleh Ashkboos, IST Austria, saleh.ashkboos@ist.ac.at; Dan Alistarh, IST Austria & Neural Magic, dan.alistarh@ist.ac.at |
| Pseudocode | No | The paper describes algorithms in prose, such as "The simplest version of our lattice quantization algorithm can be described as follows", but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | The paper does not provide concrete access information (link, DOI, repository, or formal citation with authors/year) for a publicly available or open dataset. It mentions generating synthetic data and references other datasets in the full version without providing access details. |
| Dataset Splits | No | The paper does not specify exact dataset split percentages or absolute sample counts for training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory amounts, or detailed computer specifications used for running its experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., library names like PyTorch 1.9, or specific solver versions). |
| Experiment Setup | Yes | Figure 1: Gradient quantization results for the regression example. S = 8192, n = 2, d = 100, batch_size = 4096. Figure 1 (right), regression convergence: S = 8192, n = 2, d = 100, lr = 0.8, batch = 4096, qlevel = 8. Figure 2, Local SGD convergence: S = 8192, n = 2, d = 100, lr = 0.1, batch = 4096, q = 8, rep = 10 |
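
The experiment setup above quantizes gradients to q = 8 levels before averaging across n = 2 workers of dimension d = 100. As a rough illustration of that setting, the sketch below implements a standard norm-scaled stochastic quantizer (a common baseline; the paper's own method uses lattice quantization, which is not reproduced here) and applies it to a toy distributed mean estimate. All parameter values are taken from the reported setup; the function and variable names are our own.

```python
import numpy as np

def stochastic_quantize(v, levels=8):
    """Uniform stochastic quantization of a vector to `levels` levels
    per coordinate, scaled by the vector's Euclidean norm.
    This is an unbiased baseline quantizer, not the paper's
    lattice-based scheme."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v.copy()
    # Map |v_i| / norm into [0, levels - 1] and round stochastically,
    # so the rounding is unbiased in expectation.
    scaled = np.abs(v) / norm * (levels - 1)
    lower = np.floor(scaled)
    prob = scaled - lower
    rounded = lower + (np.random.rand(*v.shape) < prob)
    return np.sign(v) * rounded / (levels - 1) * norm

# Toy version of the reported setting: n = 2 workers, d = 100, q = 8.
rng = np.random.default_rng(0)
d, n, q = 100, 2, 8
vectors = [rng.normal(size=d) for _ in range(n)]
estimate = np.mean([stochastic_quantize(v, q) for v in vectors], axis=0)
true_mean = np.mean(vectors, axis=0)
print("quantized-mean error:", np.linalg.norm(estimate - true_mean))
```

Each coordinate of the quantized vector deviates from the input by at most `norm / (levels - 1)`, so coarser quantization (smaller q) trades communication cost for a larger mean-estimation error.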