Distributed Learning with Sublinear Communication
Authors: Jayadev Acharya, Chris De Sa, Dylan Foster, Karthik Sridharan
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main result is that by slightly relaxing the standard boundedness assumptions for linear models, we can obtain distributed algorithms that enjoy optimal error with communication logarithmic in dimension. This result is based on a family of algorithms that combine mirror descent with randomized sparsification/quantization of iterates, and extends to the general stochastic convex optimization model. (Abstract) |
| Researcher Affiliation | Academia | ¹Cornell University, ²Massachusetts Institute of Technology. Correspondence to: Dylan Foster <dylanf@mit.edu>. |
| Pseudocode | Yes | Algorithm 1 (Maurey Sparsification). Input: Weight vector w ∈ ℝ^d. Sparsity level s. ... Algorithm 2 (Sparsified Mirror Descent). Input: Constraint set W with ‖w‖₁ ≤ B₁. Gradient norm parameter q ∈ [2, ∞). Gradient ℓ_q norm bound R_q. Learning rate η. Initial point w. Sparsity s, s₀ ∈ ℕ. (A sketch of the sparsification step appears after the table.) |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and focuses on algorithm design and proofs; it does not describe empirical experiments involving specific datasets for training. |
| Dataset Splits | No | The paper is theoretical and focuses on algorithm design and proofs; it does not discuss training, validation, or test dataset splits for empirical evaluation. |
| Hardware Specification | No | The paper is theoretical and does not report on empirical experiments, therefore no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not describe a software implementation or dependencies with specific version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and mathematical analysis, not on empirical experimental setups, hyperparameters, or training configurations. |
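
For context, the randomized sparsification primitive quoted in the Pseudocode row can be sketched in a few lines. The snippet below is a minimal illustration of a standard Maurey-style scheme matching the inputs listed for Algorithm 1 (a weight vector w ∈ ℝ^d and a sparsity level s); the function name, signature, and normalization are our assumptions for illustration and should be checked against Algorithm 1 in the paper.

```python
import numpy as np

def maurey_sparsify(w: np.ndarray, s: int, rng=None) -> np.ndarray:
    """Return a random s-sparse, unbiased estimate of w (Maurey-style sparsification).

    Samples s coordinates i.i.d. with probability proportional to |w_j| and
    averages signed, l1-scaled indicator vectors, so that E[output] = w.
    (Sketch under assumed notation; not the paper's reference implementation.)
    """
    rng = np.random.default_rng() if rng is None else rng
    l1 = np.abs(w).sum()
    if l1 == 0:
        return np.zeros_like(w, dtype=float)
    probs = np.abs(w) / l1
    idx = rng.choice(len(w), size=s, p=probs)            # i_1, ..., i_s drawn i.i.d. from probs
    out = np.zeros_like(w, dtype=float)
    np.add.at(out, idx, np.sign(w[idx]) * (l1 / s))      # (||w||_1 / s) * sum_t sign(w_{i_t}) e_{i_t}
    return out

# Usage sketch: a 10,000-dimensional vector compressed to at most 64 nonzeros.
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)
w_hat = maurey_sparsify(w, s=64, rng=rng)
print(np.count_nonzero(w_hat))
```

Because the output is determined by the s sampled indices, their signs, and the single scalar ‖w‖₁, transmitting it costs on the order of s·log d bits rather than d, which is the mechanism behind the sublinear-communication guarantees; per the abstract, Algorithm 2 applies this kind of sparsification/quantization to the mirror-descent iterates before they are communicated.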