Clustered Federated Learning via Gradient-based Partitioning

Authors: Heasung Kim, Hyeji Kim, Gustavo De Veciana

ICML 2024 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present a detailed analysis of the algorithm along with an evaluation on several CFL benchmarks demonstrating that it outperforms existing approaches in terms of convergence speed, clustering accuracy, and task performance." Evaluation: "We demonstrate our approach's real-world effectiveness via extensive experiments."
Researcher Affiliation | Academia | "Department of Electrical and Computer Engineering, The University of Texas at Austin, TX, USA. Correspondence to: Heasung Kim <heasung.kim@utexas.edu>."
Pseudocode | Yes | "Algorithm 1 CFL-GP. Input: K initial models {θ_k^(0)}_{k=1}^K, K initial clusters {S_k^(0)}_{k=1}^K, clustering period P, learning rates γ_k^(t), gradient features initialized as g_c^(-1) = 0 for all c ∈ [C], feature moving-average factors {β_t}_{t=1}^T, and the number of cluster updates T_cl. Output: K trained models {θ_k^(T)}_{k=1}^K and K updated clusters {S_k^(T)}_{k=1}^K." (A hedged Python sketch of this training loop is given after the table.)
Open Source Code | Yes | "Source code: https://github.com/Heasung-Kim/clustered-federated-learning-via-gradient-based-partitioning"
Open Datasets | Yes | "We employ the MNIST dataset (LeCun et al., 1998)."
Dataset Splits | Yes | "70% of each local dataset is used as a training dataset for the corresponding client, and the remaining 30% is used as a test dataset." (See the per-client split sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory amounts used for running its experiments. It discusses computational costs but not the underlying hardware.
Software Dependencies | No | The paper mentions software components like PyTorch (implicitly, via the code repository), but does not provide specific version numbers for any software dependencies required to replicate the experiments.
Experiment Setup | Yes | "For all the algorithms, we have a default setup as b = 64, C = 32, and we set the learning rate to 0.1, P = 2, and T = 200." (These defaults are collected in the config sketch after the table.)
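
The Pseudocode row above quotes only the inputs and outputs of Algorithm 1, so the following is a minimal Python sketch of how a CFL-GP-style training loop could be organized, not the authors' implementation. The `local_gradient` placeholder, the k-means step standing in for the gradient-based re-partitioning, the scalar moving-average factor `beta`, and the values of K, d, and T_cl are all illustrative assumptions; the repository linked above contains the actual code.

```python
# Hedged sketch of a CFL-GP-style loop based only on Algorithm 1's inputs and
# outputs quoted above; gradient computation and the clustering rule are stand-ins.
import numpy as np

def local_gradient(theta, client_id):
    # Placeholder for client `client_id`'s stochastic gradient on its local data.
    rng = np.random.default_rng(client_id)
    return rng.normal(size=theta.shape)

def kmeans_assign(features, K, iters=10, seed=0):
    # Tiny k-means on the gradient features; an illustrative stand-in for the
    # paper's gradient-based partitioning step.
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), K, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((features[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)
    return labels

def cfl_gp(C=32, K=4, d=10, T=200, P=2, lr=0.1, beta=0.9, T_cl=50):
    thetas = [np.zeros(d) for _ in range(K)]   # K initial models theta_k^(0)
    clusters = np.arange(C) % K                # K initial clusters S_k^(0)
    g = np.zeros((C, d))                       # gradient features g_c^(-1) = 0
    n_cluster_updates = 0
    for t in range(T):
        # Each client computes a gradient w.r.t. its current cluster's model.
        grads = np.stack([local_gradient(thetas[clusters[c]], c) for c in range(C)])
        g = beta * g + (1 - beta) * grads      # moving-average gradient features
        for k in range(K):                     # per-cluster model update
            members = np.where(clusters == k)[0]
            if len(members) > 0:
                thetas[k] -= lr * grads[members].mean(axis=0)
        if (t + 1) % P == 0 and n_cluster_updates < T_cl:
            clusters = kmeans_assign(g, K)     # re-partition clients every P rounds
            n_cluster_updates += 1
    return thetas, clusters
```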
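
The per-client 70/30 split from the Dataset Splits row can be expressed in a few lines. The array names, the shuffling step, and the fixed seed below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of a 70% train / 30% test split applied independently to each
# client's local dataset, as described in the Dataset Splits row.
import numpy as np

def split_local_dataset(x, y, train_frac=0.7, seed=0):
    idx = np.random.default_rng(seed).permutation(len(x))
    n_train = int(train_frac * len(x))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return (x[train_idx], y[train_idx]), (x[test_idx], y[test_idx])

# Example: split one client's 1000 MNIST-sized samples into 700 train / 300 test.
x_c, y_c = np.zeros((1000, 28 * 28)), np.zeros(1000, dtype=int)
(train_x, train_y), (test_x, test_y) = split_local_dataset(x_c, y_c)
```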
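
For quick reference, the defaults quoted in the Experiment Setup row can be collected into a single configuration object. Reading b as the batch size and C as the number of clients (matching the [C] notation in Algorithm 1) is an assumption, and the key names are illustrative only.

```python
# Default experiment setup quoted above, collected into one dict.
DEFAULT_CONFIG = {
    "batch_size": 64,         # b = 64 (assumed to denote the batch size)
    "num_clients": 32,        # C = 32 (assumed to denote the number of clients)
    "learning_rate": 0.1,
    "clustering_period": 2,   # P = 2
    "num_rounds": 200,        # T = 200
}
```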