Efficient Device Scheduling with Multi-Job Federated Learning

Authors: Chendi Zhou, Ji Liu, Juncheng Jia, Jingbo Zhou, Yang Zhou, Huaiyu Dai, Dejing Dou

AAAI 2022, pp. 9971-9979

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experimentation with multiple jobs and datasets. The experimental results show that our proposed approaches significantly outperform baseline approaches in terms of training time (up to 8.67 times faster) and accuracy (up to 44.6% higher).
Researcher Affiliation | Collaboration | Soochow University; Baidu Inc., China; Auburn University; North Carolina State University, United States
Pseudocode | Yes | Algorithm 1: Bayesian Optimization-Based Scheduling; Algorithm 2: Reinforcement Learning-Based Scheduling (see the scheduling sketch below the table)
Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its source code.
Open Datasets | Yes | We exploit the datasets of CIFAR-10 (Krizhevsky and Hinton 2009), emnist-letters (Cohen et al. 2017), emnist-digital (Cohen et al. 2017), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and MNIST (Le Cun et al. 1998) in the training process.
Dataset Splits | No | The paper describes how data is prepared for devices (IID and non-IID settings) but does not provide specific percentages or sample counts for overall training, validation, and test dataset splits (see the partition sketch below the table).
Hardware Specification | Yes | In addition, we use 12 Tesla V100 GPUs to simulate an FL environment composed of a parameter server and 100 devices.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper describes the jobs, models, and datasets used but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations.
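
The Pseudocode row names Algorithm 1 (Bayesian Optimization-Based Scheduling) and Algorithm 2 (Reinforcement Learning-Based Scheduling), but no source code is released. The sketch below is a minimal, generic Bayesian-optimization scheduling loop, not a reconstruction of the paper's Algorithm 1: the proxy cost model, device speeds, one-hot schedule encoding, and lower-confidence-bound acquisition are all illustrative assumptions.

```python
# Minimal sketch: Bayesian optimization over device-to-job assignments.
# Cost model and parameters are illustrative, not the paper's Algorithm 1.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
num_devices, num_jobs = 20, 3
device_speed = rng.uniform(0.5, 2.0, size=num_devices)  # assumed relative compute speed

def schedule_cost(assignment):
    """Proxy training time: the slowest job determines overall completion (lower is better)."""
    per_job = []
    for j in range(num_jobs):
        members = device_speed[assignment == j]
        per_job.append(1.0 / members.sum() if members.size else 10.0)  # penalize empty jobs
    return max(per_job)

def random_assignment():
    return rng.integers(0, num_jobs, size=num_devices)

def encode(assignment):
    """One-hot encode an assignment so the GP surrogate can model it."""
    onehot = np.zeros((num_devices, num_jobs))
    onehot[np.arange(num_devices), assignment] = 1.0
    return onehot.ravel()

# Initial random evaluations of the (expensive) cost function.
history = [random_assignment() for _ in range(5)]
X = np.array([encode(a) for a in history])
y = np.array([schedule_cost(a) for a in history])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

for step in range(20):
    gp.fit(X, y)
    # Sample candidate schedules and pick the best lower-confidence-bound point.
    candidates = [random_assignment() for _ in range(200)]
    enc = np.array([encode(a) for a in candidates])
    mu, sigma = gp.predict(enc, return_std=True)
    best = candidates[int(np.argmin(mu - 1.0 * sigma))]
    X = np.vstack([X, encode(best)])
    y = np.append(y, schedule_cost(best))

print("best proxy cost found:", y.min())
```

In practice the surrogate would be queried with the real per-round training time of each schedule rather than this toy cost function.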
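The Dataset Splits row notes that the paper describes IID and non-IID data preparation across devices without concrete split ratios. The partition sketch below shows one common way such device partitions are produced; the device count, label-skew rule, and classes_per_device value are placeholder assumptions, not the paper's procedure.

```python
# Illustrative IID / non-IID (label-skew) partitioning of a labeled dataset across devices.
import numpy as np

rng = np.random.default_rng(0)
num_devices = 100  # matches the 100 simulated devices reported in the paper

def iid_partition(labels, num_devices):
    """Shuffle all sample indices and deal them out evenly."""
    idx = rng.permutation(len(labels))
    return np.array_split(idx, num_devices)

def label_skew_partition(labels, num_devices, classes_per_device=2):
    """Each device receives samples from only a few classes (non-IID)."""
    classes = np.unique(labels)
    by_class = {c: list(rng.permutation(np.where(labels == c)[0])) for c in classes}
    parts = []
    for _ in range(num_devices):
        chosen = rng.choice(classes, size=classes_per_device, replace=False)
        part = []
        for c in chosen:
            take = max(1, len(by_class[c]) // num_devices)
            part.extend(by_class[c][:take])
            del by_class[c][:take]
        parts.append(np.array(part))
    return parts

# Toy labels standing in for a dataset such as MNIST.
labels = rng.integers(0, 10, size=60_000)
iid_parts = iid_partition(labels, num_devices)
skew_parts = label_skew_partition(labels, num_devices)
print(len(iid_parts[0]), np.unique(labels[skew_parts[0]]))
```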