Complement Sparsification: Low-Overhead Model Pruning for Federated Learning
Authors: Xiaopeng Jiang, Cristian Borcea
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate CS experimentally with two popular FL benchmark datasets. |
| Researcher Affiliation | Academia | Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA; xj8@njit.edu, borcea@njit.edu |
| Pseudocode | Yes | Algorithm 1: Complement Sparsification Pseudo-code (the general idea is sketched after the table). |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | CS is evaluated with two benchmark datasets in LEAF (Caldas et al. 2018): Twitter and FEMNIST. |
| Dataset Splits | No | The training dataset is constructed with 80% of data from each user, and the rest of the data are for testing. Only these percentages are stated; no concrete split files or validation partition are given (the per-user split is sketched after the table). |
| Hardware Specification | Yes | The experiments are conducted on an Ubuntu Linux cluster (Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz with 512GB memory, 4 NVIDIA P100-SXM2 GPUs with 64GB total memory). |
| Software Dependencies | No | We implement CS with Flower (Beutel et al. 2020) and TensorFlow. Specific version numbers for these software dependencies are not provided. |
| Experiment Setup | Yes | Table 1 shows the training hyper-parameters for the two models. We set the aggregation ratio (η in equation 8) to 1.5 to avoid clients' training outcomes being pruned away if they are too small. We set the server model sparsity to 50%, unless otherwise specified. (The aggregation step is sketched after the table.) |
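
The Pseudocode row points to Algorithm 1 but does not reproduce it. The NumPy sketch below illustrates only the general idea suggested by the paper's title and setup: the server keeps a magnitude-pruned sparse model, and a client's contribution is masked to the complement of the server's non-zero positions. The function names, the magnitude criterion, and the masking rule are assumptions for illustration, not the paper's actual Algorithm 1.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries until `sparsity` fraction are zero."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def complement(client_update: np.ndarray, server_weights: np.ndarray) -> np.ndarray:
    """Keep the client's update only where the sparse server model is zero."""
    comp = client_update.copy()
    comp[server_weights != 0] = 0.0
    return comp

# Toy round: server prunes to 50% sparsity (the paper's default); a client's
# stand-in update is masked to the complement of the server's surviving positions.
server = magnitude_prune(np.random.randn(10), sparsity=0.5)
client_update = np.random.randn(10) * 0.1   # stand-in for local training output
print(complement(client_update, server))
```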
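The Experiment Setup row states that η = 1.5 is used "to avoid clients' training outcomes being pruned away if they are too small." One plausible reading, assumed here, is that the averaged client complements are scaled by η before being merged into the server model and re-pruned; equation 8 in the paper defines the actual rule. This sketch reuses `magnitude_prune` from the block above.

```python
import numpy as np

def aggregate(server: np.ndarray, client_complements: list,
              eta: float = 1.5, sparsity: float = 0.5) -> np.ndarray:
    """Merge eta-scaled, averaged client complements into the server model, then re-prune."""
    avg = np.mean(client_complements, axis=0)
    merged = server + eta * avg               # eta = 1.5 boosts small client contributions
    return magnitude_prune(merged, sparsity)  # magnitude_prune defined in the sketch above
```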
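The Dataset Splits row quotes an 80%/20% per-user partition. A minimal sketch of that construction, assuming each LEAF user's samples are available as a list (data loading, the helper name, and the fixed seed are illustrative assumptions):

```python
import random

def split_per_user(user_samples, train_frac=0.8, seed=0):
    """Per-user split: train_frac of each user's data for training, the rest for testing."""
    rng = random.Random(seed)  # seed is illustrative; the paper does not specify one
    train, test = {}, {}
    for user, samples in user_samples.items():
        shuffled = list(samples)
        rng.shuffle(shuffled)
        cut = int(train_frac * len(shuffled))
        train[user] = shuffled[:cut]
        test[user] = shuffled[cut:]
    return train, test

train, test = split_per_user({"user_a": list(range(10))})
print(len(train["user_a"]), len(test["user_a"]))  # 8 2
```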