Anchor Sampling for Federated Learning with Partial Client Participation

Authors: Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical studies on real-world datasets validate the effectiveness of FedAMD and demonstrate the superiority of the proposed algorithm: Not only does it considerably save computation and communication costs, but also the test accuracy significantly improves."
Researcher Affiliation | Academia | "1 Purdue University, West Lafayette, IN, USA; 2 The Hong Kong Polytechnic University, Hong Kong; 3 Hohai University, Nanjing, China; 4 The University of British Columbia, Vancouver, Canada."
Pseudocode | Yes | "The pseudo-code is illustrated in Algorithm 1." (A generic partial-participation round is sketched below the table.)
Open Source Code | Yes | "Our code is available at https://github.com/HarliWu/FedAMD."
Open Datasets | Yes | "We train a convolutional neural network LeNet-5 (LeCun et al., 2015; 1989) using Fashion-MNIST (Xiao et al., 2017) and 2-layer MLP on EMNIST digits (Cohen et al., 2017)." (See the LeNet-5 sketch below the table.)
Dataset Splits | No | The paper describes partitioning data across clients and mentions using a "test dataset" for accuracy. However, it does not specify a distinct validation split (e.g., percentages or counts for training, validation, and test sets).
Hardware Specification | Yes | "Our simulation experiment runs on Ubuntu 18.04 with Intel(R) Xeon(R) Gold 6254 CPU, NVIDIA RTX A6000 GPU, and CUDA 11.2."
Software Dependencies | Yes | "Our code is implemented using Python and PyTorch v.1.12.1."
Experiment Setup | Yes | "The default setting in this section is K = 10, b = 64, and b is the size of the local training set. For different experiments, unless specified, we leverage the best setting (e.g., learning rate) to obtain the best results. All the numerical results in this section represent the average performance of three experiments using different random seeds. ... the local learning rate (ηl) picks the best one from the set {0.1, 0.03, 0.02, 0.01, 0.008, 0.005}, while the global learning rate (ηs) is selected from the set {1.0, 0.8, 0.1}." (See the hyperparameter-sweep sketch below the table.)
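The page confirms that pseudo-code exists (Algorithm 1) but does not reproduce it. For orientation, here is a minimal sketch of a generic partial-participation federated round with separate local (ηl) and global (ηs) learning rates. This is an illustrative stand-in, not the authors' FedAMD algorithm, which additionally splits sampled clients into anchor and miner groups; `client.next_batch()` is a hypothetical data-access helper.

```python
# Generic partial-participation FedAvg-style round: sample K clients,
# run local SGD at rate eta_l, then take a server step at rate eta_s
# along the averaged client update. NOT the authors' Algorithm 1.
import copy
import random
import torch

def run_round(global_model, clients, K=10, eta_l=0.01, eta_s=1.0, local_steps=5):
    sampled = random.sample(clients, K)  # partial client participation
    deltas = []
    for client in sampled:
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=eta_l)
        for _ in range(local_steps):
            x, y = client.next_batch()  # hypothetical client API
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(local(x), y)
            loss.backward()
            opt.step()
        # Record how far this client moved away from the global weights.
        deltas.append([gp.data - lp.data for gp, lp in
                       zip(global_model.parameters(), local.parameters())])
    # Server step: with eta_s = 1.0 this reduces to plain FedAvg averaging.
    with torch.no_grad():
        for i, gp in enumerate(global_model.parameters()):
            avg = torch.stack([d[i] for d in deltas]).mean(dim=0)
            gp.data -= eta_s * avg
```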
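The "Open Datasets" row quotes LeNet-5 on Fashion-MNIST. As a point of reference, below is a minimal PyTorch sketch of a LeNet-5-style network for 28x28 grayscale inputs; the layer sizes follow the classic LeCun configuration and may differ from the paper's exact variant (e.g., activation or pooling choices). For EMNIST digits the paper instead uses a 2-layer MLP, which would replace the convolutional stack with two fully connected layers.

```python
# A minimal LeNet-5-style network for 28x28 Fashion-MNIST images.
# Layer sizes follow the classic LeCun configuration; the paper's
# exact variant may differ.
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))
```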
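The "Experiment Setup" row states that the best learning rates are chosen from fixed grids and that reported numbers average three random seeds. Below is a hedged sketch of that selection protocol; `train_and_eval` is a hypothetical placeholder for one full training run (the paper does not describe its tuning loop), and the seed values are assumptions, since the paper does not list them.

```python
# Hedged sketch of the quoted protocol: sweep the stated learning-rate
# grids and average test accuracy over three random seeds.
import itertools
import statistics

LOCAL_LRS = [0.1, 0.03, 0.02, 0.01, 0.008, 0.005]   # eta_l grid from the paper
GLOBAL_LRS = [1.0, 0.8, 0.1]                         # eta_s grid from the paper
SEEDS = [0, 1, 2]  # assumption: the paper does not list its seeds

def train_and_eval(eta_l: float, eta_s: float, seed: int) -> float:
    """Placeholder: run one training job with these settings, return accuracy."""
    raise NotImplementedError

def best_setting():
    scores = {}
    for eta_l, eta_s in itertools.product(LOCAL_LRS, GLOBAL_LRS):
        scores[(eta_l, eta_s)] = statistics.mean(
            train_and_eval(eta_l, eta_s, s) for s in SEEDS)
    return max(scores, key=scores.get)
```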