Communication Complexity of Distributed Convex Learning and Optimization
Authors: Yossi Arjevani, Ohad Shamir
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we take the opposite direction, and study what are the fundamental performance limitations in solving Eq. (1), under several different sets of assumptions. We identify cases where existing algorithms are already optimal (at least in the worst-case), as well as cases where room for further improvement is still possible. (The presumed form of Eq. (1) is sketched below the table.) |
| Researcher Affiliation | Academia | Yossi Arjevani, Weizmann Institute of Science, Rehovot 7610001, Israel (yossi.arjevani@weizmann.ac.il); Ohad Shamir, Weizmann Institute of Science, Rehovot 7610001, Israel (ohad.shamir@weizmann.ac.il) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper is theoretical and focuses on lower bounds; it does not present a new method with associated open-source code. |
| Open Datasets | No | The paper is theoretical and focuses on lower bounds for distributed optimization; it does not use or describe any specific datasets for training or experimentation. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments or discuss dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not conduct experiments that would require specific hardware specifications. |
| Software Dependencies | No | The paper is theoretical and does not describe any software dependencies or versions for experimental reproduction. |
| Experiment Setup | No | The paper is theoretical and focuses on lower bounds; it does not describe an experimental setup, hyperparameters, or training configurations. |
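
For context on the Eq. (1) referenced above: the paper studies distributed convex optimization over m machines, each holding a local convex function F_j of a shared parameter vector w. The following is our presumed reconstruction of Eq. (1) from that setting, not a verbatim copy of the paper's notation:

```latex
% Presumed form of Eq. (1): minimize the average of m local convex
% functions, where machine j holds F_j and only iterates/gradients
% of bounded size are exchanged between machines.
\min_{w \in \mathcal{W}} \; F(w)
  \qquad \text{where} \qquad
  F(w) \;:=\; \frac{1}{m} \sum_{j=1}^{m} F_j(w).
```

As we understand it, the paper's lower bounds count the number of communication rounds any algorithm in this model needs to reach a given suboptimality in F, under various smoothness and strong-convexity assumptions on the F_j. To make the round-based model concrete, here is a minimal, hypothetical sketch (not from the paper) of one family of algorithms the bounds apply to: synchronized distributed gradient descent, where each iteration costs exactly one communication round. The names `local_gradient` and `distributed_gd`, and the least-squares choice of F_j, are our illustrative assumptions:

```python
# Hypothetical illustration (not the paper's method): one synchronized
# communication round per iteration of distributed gradient descent on
# F(w) = (1/m) * sum_j F_j(w).
import numpy as np

def local_gradient(A_j, b_j, w):
    """Gradient of a local least-squares objective F_j(w) = 0.5*||A_j w - b_j||^2.
    Least squares is just a stand-in for a smooth convex F_j."""
    return A_j.T @ (A_j @ w - b_j)

def distributed_gd(machines, w0, step, rounds):
    """Each round: every machine sends one O(d)-sized gradient vector,
    the averaged gradient is broadcast back, and all machines take a step."""
    w = w0.copy()
    for _ in range(rounds):
        grads = [local_gradient(A_j, b_j, w) for (A_j, b_j) in machines]
        w -= step * np.mean(grads, axis=0)  # one communication round
    return w

# Toy usage: m = 4 machines, d = 5 parameters.
rng = np.random.default_rng(0)
machines = [(rng.standard_normal((10, 5)), rng.standard_normal(10))
            for _ in range(4)]
w_hat = distributed_gd(machines, w0=np.zeros(5), step=0.01, rounds=100)
```

The point of the sketch is the accounting, not the optimizer: the paper's results lower-bound the `rounds` parameter needed by any such round-based scheme, which is why no pseudocode, code, datasets, or experiments appear in the paper itself.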