FedSplit: an algorithmic framework for fast federated optimization

Authors: Reese Pathak, Martin J. Wainwright

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We complement our theory with some experiments that demonstrate the benefits of our methods in practice." and "In this section, we present numerical results for FedSplit on some convex federated optimization problem instances."
Researcher Affiliation | Academia | Reese Pathak (Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720; pathakr@berkeley.edu) and Martin J. Wainwright (Department of Electrical Engineering and Computer Sciences and Department of Statistics, University of California, Berkeley, Berkeley, CA 94720; wainwrig@berkeley.edu)
Pseudocode | Yes | "Algorithm 1 (FedSplit): Splitting scheme for solving federated problems of the form (1)"
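The FedSplit scheme of Algorithm 1 applies Peaceman-Rachford-style operator splitting to the consensus reformulation of the federated problem. The following is a minimal sketch, not the authors' code: it instantiates the update on least-squares clients, where the proximal operator has a closed form; the step size `s`, problem sizes, and function names (`prox_quadratic`, `fedsplit`) are illustrative choices.

```python
import numpy as np

def prox_quadratic(A, b, s, u):
    """Exact prox of f(x) = 0.5*||Ax - b||^2 with step s:
    argmin_x f(x) + (1/(2s))*||x - u||^2, a single linear solve."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + np.eye(d) / s, A.T @ b + u / s)

def fedsplit(clients, s=0.1, iters=500):
    """FedSplit sketch: each round, every client j applies its local prox to
    the reflected point 2*xbar - z_j and re-centers z_j; the server averages."""
    d = clients[0][0].shape[1]
    z = [np.zeros(d) for _ in clients]
    for _ in range(iters):
        xbar = sum(z) / len(z)                               # server average
        for j, (A, b) in enumerate(clients):
            half = prox_quadratic(A, b, s, 2 * xbar - z[j])  # local prox step
            z[j] = z[j] + 2 * (half - xbar)                  # local centering
    return sum(z) / len(z)
```

At a fixed point, prox_{s f_j}(2x̄ − z_j) = x̄ for every client j, which forces Σ_j ∇f_j(x̄) = 0, so the average x̄ solves the original federated problem.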
Open Source Code | No | The paper points to Section B of the supplement for implementation details but provides no link to open-source code: there is no explicit statement of a code release or repository URL.
Open Datasets | Yes | "This dataset was proposed as a benchmark for federated optimization; there are N = 805,263 images, m = 3,550 clients, and K = 62 classes. The problem dimension is d = 6,875; see Section B.2.2 in the supplement for additional details." This refers to the FEMNIST dataset from the LEAF framework [6]: S. Caldas, P. Wu, et al. LEAF: A benchmark for federated settings. arXiv:1812.01097, 2018.
Dataset Splits | No | The paper mentions the FEMNIST dataset but specifies no train/validation/test splits, percentages, or other partitioning details; it states only the total number of images, clients, and classes.
Hardware Specification | No | The paper gives no hardware details such as GPU or CPU models, memory specifications, or cloud instance types used to run the experiments.
Software Dependencies | No | The paper lists no software dependencies with version numbers for its implementation or experiments. Although it references CVXPY, it does not state that CVXPY was used in the experimental setup, nor which version.
Experiment Setup | Yes | "We implement FedSplit with exact proximal operators and inexact implementations with a constant number of gradient steps e ∈ {1, 5, 10}. For comparison, we implemented a federated gradient method as previously described (4)." and "Given the large-scale nature of this example, we implement an accelerated gradient method for the proximal updates, terminated when the gradient of the proximal objective drops below 10^-8."
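The inexact proximal updates quoted above can be sketched as follows. This is an illustrative plain gradient-descent version, not the authors' code (their large-scale runs use an accelerated method), and the learning rate `lr` is a hypothetical tuning parameter. It stops either after a fixed number of gradient steps `e` (the paper uses e ∈ {1, 5, 10}) or once the gradient of the proximal objective falls below `tol`, mirroring the 10^-8 criterion.

```python
import numpy as np

def inexact_prox(grad_f, u, s, lr, e=None, tol=1e-8, max_iter=10_000):
    """Approximate prox_{s f}(u) by gradient descent on the proximal
    objective h(x) = f(x) + (1/(2s))*||x - u||^2, started at u.
    Runs exactly e steps if e is given; otherwise iterates until
    ||grad h(x)|| < tol (illustrative stand-in for the paper's
    accelerated method with the same stopping rule)."""
    x = u.astype(float).copy()
    for _ in range(max_iter if e is None else e):
        g = grad_f(x) + (x - u) / s      # gradient of the prox objective
        if e is None and np.linalg.norm(g) < tol:
            break
        x = x - lr * g
    return x
```

For f(x) = 0.5*||x − c||^2 the exact prox is (s*c + u)/(1 + s), which gives a simple way to sanity-check the tolerance-based variant against a closed form.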