Shielding Federated Learning: Robust Aggregation with Adaptive Client Selection
Authors: Wei Wan, Shengshan Hu, Jianrong Lu, Leo Yu Zhang, Hai Jin, Yuanyuan He
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results show that MAB-RFL outperforms existing defenses in three attack scenarios under different percentages of attackers. |
| Researcher Affiliation | Academia | (1) School of Cyber Science and Engineering, Huazhong University of Science and Technology; (2) School of Computer Science and Technology, Huazhong University of Science and Technology; (3) National Engineering Research Center for Big Data Technology and System; (4) Services Computing Technology and System Lab; (5) Hubei Engineering Research Center on Big Data Security; (6) Cluster and Grid Computing Lab; (7) School of Information Technology, Deakin University |
| Pseudocode | Yes | Algorithm 1 Thompson sampling for the Bernoulli bandit; Algorithm 2 A Complete Description of MAB-RFL; Algorithm 3 ACS (Adaptive Client Selection); Algorithm 4 ISA (Identifying Sybil Attacks); Algorithm 5 INSA (Identifying Non-Sybil Attacks). A minimal Thompson-sampling sketch appears after this table. |
| Open Source Code | No | The paper does not contain any explicit statement about making the source code for the proposed method publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Datasets and Models. We evaluate MAB-RFL on MNIST and CIFAR-10. |
| Dataset Splits | No | The paper describes how training data is distributed among clients ('the training set size of each client is randomly chosen') and states the total number of iterations, but it does not specify explicit train/validation/test splits as percentages or sample counts. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages, libraries, or frameworks used for implementation. |
| Experiment Setup | Yes | We set the number of clients K = 50 for both datasets. To reduce the total communication rounds between clients and the server, we set the local epoch of each client to be 3. The total iteration T = 100. The importance of historical information λ = 0.1. For MNIST, we set the estimated maximum cosine similarity cmax = 0.7, minimum cosine similarity cmin = 0.3, and the acceptable difference between clusters α = 0.1. For CIFAR-10, we set cmax = 0.3, cmin = 0.1, α = 0. (These values are collected into a config sketch after this table.) |
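The Pseudocode row lists Thompson sampling for the Bernoulli bandit as Algorithm 1, the core of the paper's adaptive client selection (ACS). The sketch below is a minimal, generic Thompson-sampling loop for Bernoulli bandits, not the authors' exact ACS procedure: the Beta(1, 1) prior, the 0.5 acceptance threshold, and the function names are illustrative assumptions.

```python
import numpy as np

def thompson_select(successes, failures, rng):
    """Sample a Bernoulli-bandit score per client from its Beta posterior
    and select the clients whose sampled score exceeds 0.5.

    successes/failures: per-client counts of past rounds in which the
    client's update was accepted/rejected by the aggregation rule.
    """
    theta = rng.beta(successes + 1.0, failures + 1.0)  # Beta(1, 1) prior (assumed)
    return np.flatnonzero(theta > 0.5)                 # acceptance rule (assumed)

def update_posteriors(successes, failures, selected, accepted):
    """After aggregation, reward selected clients judged benign (accepted)
    and penalize those judged malicious."""
    for k in selected:
        if k in accepted:
            successes[k] += 1
        else:
            failures[k] += 1

# Usage: K = 50 clients, as in the paper's experiment setup.
rng = np.random.default_rng(0)
K = 50
successes = np.zeros(K)
failures = np.zeros(K)
selected = thompson_select(successes, failures, rng)
```

Thompson sampling naturally balances exploration and exploitation here: a client with few past observations has a wide posterior and is still occasionally sampled, while a client repeatedly judged benign is sampled with a score near 1.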
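For reference, the hyperparameters quoted verbatim in the Experiment Setup row can be collected into a small config. The values are exactly those reported in the paper; the key names are our own shorthand, not identifiers from the authors' (unreleased) code.

```python
# Hyperparameters reported in the paper's experiment setup.
# Key names are illustrative shorthand, not the authors' identifiers.
COMMON = {
    "num_clients": 50,    # K
    "local_epochs": 3,
    "total_rounds": 100,  # T
    "lambda_hist": 0.1,   # importance of historical information (lambda)
}
PER_DATASET = {
    "MNIST":    {"c_max": 0.7, "c_min": 0.3, "alpha": 0.1},
    "CIFAR-10": {"c_max": 0.3, "c_min": 0.1, "alpha": 0.0},
}
```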