FedMut: Generalized Federated Learning via Stochastic Mutation
Authors: Ming Hu, Yue Cao, Anran Li, Zhiming Li, Chengwei Liu, Tianlin Li, Mingsong Chen, Yang Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on well-known datasets demonstrate the effectiveness of our FedMut approach in various data heterogeneity scenarios. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (2) MoE Engineering Research Center of SW/HW Co-Design Tech. and App., East China Normal University, China |
| Pseudocode | Yes | Algorithm 1: Implementation of FedMut |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of its own code. |
| Open Datasets | Yes | We selected three well-known datasets to evaluate the effectiveness of our FedMut approach, i.e., CIFAR-10, CIFAR-100 (Krizhevsky 2009), and Shakespeare (Caldas et al. 2018), where CIFAR-10 and CIFAR-100 are image datasets and Shakespeare is a text dataset. |
| Dataset Splits | No | The paper describes training and testing, but it does not explicitly specify a validation dataset split or its size/percentage. |
| Hardware Specification | Yes | We conducted all the experiments on an Ubuntu workstation with an Intel i9 CPU, 64GB memory, and two NVIDIA RTX 4090 GPUs. |
| Software Dependencies | No | The paper mentions software components like 'SGD optimizer' and models like 'ResNet-18' and 'VGG16 (Torchvision Model 2022)', but does not specify versions for programming languages or libraries (e.g., Python, PyTorch). |
| Experiment Setup | Yes | For all the experiments, we use the SGD optimizer with a learning rate of 0.01 and a momentum of 0.9. For each FL training round, we set the batch size to 50 and the number of epochs for each round of local training to 5. |
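The Experiment Setup row pins down the client-side training loop completely. Below is a minimal sketch of that configuration, assuming PyTorch (the paper names torchvision models but no library versions); the helper `local_train` and the CIFAR-10/ResNet-18 pairing are illustrative choices drawn from the evaluation description, not the authors' code.

```python
# A minimal sketch of the reported local-training setup:
# SGD, lr=0.01, momentum=0.9, batch size 50, 5 local epochs.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

def local_train(model: nn.Module, client_loader: DataLoader, epochs: int = 5) -> nn.Module:
    """Run one client's local update with the hyperparameters from the paper."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in client_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model

# Example: CIFAR-10 with ResNet-18 and batch size 50, as in the reported setup.
dataset = datasets.CIFAR10(root="./data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(dataset, batch_size=50, shuffle=True)
model = local_train(models.resnet18(num_classes=10), loader)
```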
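The Pseudocode row points to Algorithm 1 ("Implementation of FedMut"), and since no open-source code is provided, the server-side stochastic-mutation step can only be sketched. The rule below, perturbing the aggregated global model along the latest global update with random per-layer signs scaled by a mutation factor `alpha`, is one plausible reading of the paper; every name and detail here is an assumption, not a verified reimplementation.

```python
# A rough, hedged sketch of the server-side mutation step suggested by
# Algorithm 1. The mutation rule and the names `mutate_global_model` and
# `alpha` are assumptions drawn from the paper's description.
import copy
import random
import torch
from torch import nn

def mutate_global_model(global_model: nn.Module,
                        prev_global_model: nn.Module,
                        num_clients: int,
                        alpha: float = 1.0) -> list[nn.Module]:
    """Generate one mutated model per client around the aggregated global model."""
    global_state = global_model.state_dict()
    prev_state = prev_global_model.state_dict()
    # Global update direction observed in the last round.
    delta = {k: global_state[k] - prev_state[k] for k in global_state}
    mutants = []
    for _ in range(num_clients):
        mutant = copy.deepcopy(global_model)
        state = mutant.state_dict()
        for k in state:
            if not torch.is_floating_point(state[k]):
                continue  # leave integer buffers (e.g., BatchNorm counters) untouched
            sign = random.choice([-1.0, 1.0])  # stochastic per-layer mutation
            state[k] = global_state[k] + sign * alpha * delta[k]
        mutant.load_state_dict(state)
        mutants.append(mutant)
    return mutants
```

With symmetric ±1 signs the mutants average back to the global model in expectation, which is the property a mutate-then-aggregate scheme needs to remain unbiased across rounds.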