Heterogeneity for the Win: One-Shot Federated Clustering
Authors: Don Kurian Dennis, Tian Li, Virginia Smith
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We motivate our analysis with experiments on common FL benchmarks, and highlight the practical utility of one-shot clustering through use-cases in personalized FL and device sampling. |
| Researcher Affiliation | Academia | 1Carnegie Mellon University, Pittsburgh, PA, USA. Correspondence to: Don Dennis <dondennis@cmu.edu>. |
| Pseudocode | Yes | Algorithm 1 Local k(z)-means (Awasthi & Sheffet, 2012) and Algorithm 2 k-FED are provided with structured steps. |
| Open Source Code | Yes | Implementation of k-FED and experimental setup details can be found at: http://github.com/metastableB/kfed/. |
| Open Datasets | Yes | We perform this experiment on the FEMNIST and Shakespeare datasets (Caldas et al., 2018) (see Appendix B for details). ... We use the MNIST dataset for this experiment. |
| Dataset Splits | No | The paper mentions using datasets like FEMNIST, Shakespeare, and MNIST, but it does not specify explicit train/validation/test split percentages, sample counts, or reference predefined standard splits in detail for reproducibility. |
| Hardware Specification | No | The paper describes the context of federated learning involving 'mobile phones or wearables' but does not provide any specific hardware details such as GPU/CPU models, memory, or cloud computing specifications used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper states 'Implementation of k-FED and experimental setup details can be found at: http://github.com/metastableB/kfed/' (Section 4), indicating these details are external. The main text itself does not contain specific hyperparameters, optimizer settings, or other detailed system-level training configurations. |
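The Pseudocode row above refers to the paper's one-shot scheme: each device clusters its own data once and ships only its local centers, and the server then clusters the pooled centers. The following is a minimal illustrative sketch of that idea, not the paper's implementation: it substitutes plain Lloyd's iterations with farthest-first initialization for the Awasthi–Sheffet-style local procedure the paper actually uses, and the names `kmeans` and `k_fed` are our own.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Basic k-means: farthest-first initialization, then Lloyd's iterations.
    (A simplification; the paper builds on Awasthi & Sheffet, 2012.)"""
    # Farthest-first init: start from the first point, then repeatedly
    # add the point farthest from all centers chosen so far.
    centers = [X[0]]
    for _ in range(1, k):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(dists.argmax())])
    centers = np.array(centers, dtype=float)

    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center; keep the old one if its cluster empties.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers

def k_fed(device_data, k_prime, k):
    """One-shot federated clustering sketch: each device runs local
    k'-means once (a device may hold fewer than k clusters), sends its
    centers to the server, and the server clusters the pooled centers."""
    local_centers = [kmeans(X, min(k_prime, len(X))) for X in device_data]
    pooled = np.vstack(local_centers)
    return kmeans(pooled, k)
```

Note the single round of communication: only `k_prime` center vectors leave each device, which is what makes the scheme "one-shot" relative to iterative federated clustering.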