DRIVE: One-bit Distributed Mean Estimation
Authors: Shay Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Yaniv Ben-Itzhak, Michael Mitzenmacher
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our methods on a collection of distributed and federated learning tasks, using a variety of datasets, and show a consistent improvement over the state of the art. |
| Researcher Affiliation | Collaboration | Shay Vargaftik VMware Research shayv@vmware.com; Ran Ben Basat University College London r.benbasat@cs.ucl.ac.uk; Amit Portnoy Ben-Gurion University amitport@post.bgu.ac.il; Gal Mendelson Stanford University galmen@stanford.edu; Yaniv Ben-Itzhak VMware Research ybenitzhak@vmware.com; Michael Mitzenmacher Harvard University michaelm@eecs.harvard.edu |
| Pseudocode | Yes | The pseudocode of DRIVE appears in Algorithm 1. |
| Open Source Code | Yes | All the results presented in this paper are fully reproducible by our source code, available at [29]. |
| Open Datasets | Yes | We use MNIST [51, 52], EMNIST [53], CIFAR-10 and CIFAR-100 [54] for image classification tasks; a next-character-prediction task using the Shakespeare dataset [55]; and a next-word-prediction task using the Stack Overflow dataset [56]. |
| Dataset Splits | Yes | Detailed configuration information and additional results appear in Appendix E. We use code, client partitioning, models, hyperparameters, and validation metrics from the federated learning benchmark of [62]. |
| Hardware Specification | Yes | Table 1: Empirical NMSE and average per-vector encoding time (in milliseconds, on an RTX 3090 GPU)...; ...using NVIDIA GeForce GTX 1060 (6GB) GPU... |
| Software Dependencies | No | All the distributed tasks are implemented over PyTorch [45] and all the federated tasks are implemented over TensorFlow Federated [46]. (Specific version numbers for PyTorch or TensorFlow Federated are not provided.) |
| Experiment Setup | Yes | Detailed configuration information and additional results appear in Appendix E. We use code, client partitioning, models, hyperparameters, and validation metrics from the federated learning benchmark of [62]. |
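For readers unfamiliar with the paper, the core idea named in the title (one-bit distributed mean estimation) can be illustrated with a minimal sketch: each client applies a random rotation to its vector, transmits only the signs of the rotated coordinates plus a single scale, and the server inverts the rotation. This is an assumption-laden simplification — the function names are hypothetical, the dense QR-based rotation stands in for the structured (e.g. Hadamard-based) rotations the paper uses for efficiency, and the scale `||Rx||_1 / d` corresponds to an error-minimizing choice rather than the paper's full set of variants; consult Algorithm 1 in the paper for the actual method.

```python
import numpy as np

def drive_encode(x, rng):
    """Sketch of one-bit encoding: rotate, keep signs and one scale.

    NOTE: hypothetical helper for illustration. A dense random orthogonal
    rotation (QR of a Gaussian matrix) stands in for the paper's
    structured rotations, which are much cheaper to apply.
    """
    d = len(x)
    R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random rotation
    rx = R @ x
    signs = np.sign(rx)                      # one bit per coordinate
    scale = np.linalg.norm(rx, ord=1) / d    # one scalar per vector
    return signs, scale, R

def drive_decode(signs, scale, R):
    """Invert the rotation on the scaled sign vector."""
    return R.T @ (scale * signs)
```

Note that the payload per vector is d bits plus one float, versus 32d bits for uncompressed floats; the decoded estimate is deterministically positively correlated with the input, since `<x_hat, x> = scale * ||Rx||_1 > 0` for any nonzero `x`.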