Optimal Algorithms for Non-Smooth Distributed Optimization in Networks
Authors: Kevin Scaman, Francis Bach, Sébastien Bubeck, Laurent Massoulié, Yin Tat Lee
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a d^{1/4} multiplicative factor of the optimal convergence rate, where d is the underlying dimension. |
| Researcher Affiliation | Collaboration | 1 Huawei Noah's Ark Lab, 2 INRIA, École Normale Supérieure, PSL Research University, 3 Microsoft Research, 4 University of Washington, 5 MSR-INRIA Joint Centre |
| Pseudocode | Yes | The paper includes Algorithm 1 (distributed randomized smoothing) and Algorithm 2 (multi-step primal-dual algorithm); hedged illustrative sketches of the smoothing and gossip primitives these algorithms build on are given after the table. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The paper is theoretical and focuses on algorithm design and convergence rate analysis for abstract convex functions. It does not mention or use any specific dataset for training or evaluation, nor does it provide information about dataset availability. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, no dataset split information (training, validation, test) is provided. |
| Hardware Specification | No | The paper is theoretical and does not report on empirical experiments requiring specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not report on empirical experiments requiring specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe empirical experimental setups with hyperparameters or training configurations. |
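
The abstract describes DRS as being "based on a local smoothing of the objective function". As a rough illustration of the randomized (Gaussian) smoothing idea, not the paper's actual DRS procedure (which also involves communication over the network and an accelerated outer loop), the following sketch estimates the smoothed value f_gamma(x) = E[f(x + gamma*Z)], Z ~ N(0, I), and its gradient by Monte Carlo averaging of subgradients at perturbed points. All function names, signatures, and parameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smoothed_oracle(f, subgrad, x, gamma=0.1, num_samples=100, rng=None):
    """Monte Carlo estimate of the Gaussian-smoothed function
        f_gamma(x) = E[f(x + gamma * Z)],  Z ~ N(0, I),
    and of its gradient E[subgrad(x + gamma * Z)].
    Illustrative sketch only; not the paper's DRS algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    vals = np.empty(num_samples)
    grad = np.zeros(d)
    for i in range(num_samples):
        z = rng.standard_normal(d)          # Gaussian perturbation
        vals[i] = f(x + gamma * z)          # value at the perturbed point
        grad += subgrad(x + gamma * z)      # subgradient at the perturbed point
    return vals.mean(), grad / num_samples

if __name__ == "__main__":
    # Usage: smoothing the non-smooth function f(x) = ||x||_1 (subgradient: sign(x))
    f = lambda x: np.abs(x).sum()
    g = lambda x: np.sign(x)
    x0 = np.array([0.5, -1.0, 0.0])
    val, grad = smoothed_oracle(f, g, x0, gamma=0.05, num_samples=2000)
    print("smoothed value estimate:", val)
    print("smoothed gradient estimate:", grad)
```

The parameter gamma trades off smoothness against approximation error: larger gamma yields a smoother surrogate but a larger gap between f_gamma and f, which is the trade-off that drives the d^{1/4} factor mentioned in the abstract.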
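
The decentralized setting in the abstract (a "network of computing units" running MSPD) relies on neighbor-to-neighbor communication, typically expressed as multiplication by a gossip matrix supported on the network edges. The sketch below only illustrates this basic gossip-averaging primitive under the assumption of a symmetric, doubly stochastic gossip matrix W on a hypothetical 4-node ring; it is not the MSPD algorithm, which additionally performs primal-dual updates and multi-step (accelerated) communication rounds.

```python
import numpy as np

def gossip_step(X, W):
    """One synchronous gossip round: each node replaces its local vector by a
    weighted average of its neighbors' vectors, X <- W @ X.
    W is assumed symmetric and doubly stochastic, nonzero only on network edges."""
    return W @ X

if __name__ == "__main__":
    # Hypothetical 4-node ring network with uniform neighbor weights.
    W = np.array([
        [0.50, 0.25, 0.00, 0.25],
        [0.25, 0.50, 0.25, 0.00],
        [0.00, 0.25, 0.50, 0.25],
        [0.25, 0.00, 0.25, 0.50],
    ])
    X = np.array([[1.0], [2.0], [3.0], [4.0]])  # one local scalar per node
    for _ in range(20):
        X = gossip_step(X, W)
    # Repeated gossip rounds drive all local values toward the network average (2.5).
    print("after 20 gossip rounds:", X.ravel())
```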