FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates
Authors: Bo Zhao, Peng Sun, Tao Wang, Keyu Jiang
AAAI 2022, pp. 9171-9179
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct an exhaustive experimental evaluation of FedInv. The results demonstrate that FedInv significantly outperforms the existing robust FL schemes in defending against stealthy poisoning attacks under highly non-IID data partitions. |
| Researcher Affiliation | Collaboration | (1) College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China; (2) School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China; (3) Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS), China |
| Pseudocode | Yes | Algorithm 1: FedInv |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | MNIST (LeCun, Cortes, and Burges 1998): 10-class handwritten digit image classification dataset consisting of 60000 training samples and 10000 testing samples. Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017): 10-class fashion image classification dataset consisting of 60000 training samples and 10000 testing samples. HAR (Anguita et al. 2013): 6-class human activity recognition dataset collected via sensors embedded in 30 users' smartphones (21 users' datasets are used for training and 9 for testing; each has around 300 data samples). |
| Dataset Splits | No | The paper specifies training and testing sample counts for MNIST, Fashion-MNIST, and HAR datasets, but does not explicitly provide details about a separate validation split or its size/methodology. |
| Hardware Specification | Yes | We conduct experiments using PyTorch 1.5.1 on a machine with a TITAN RTX GPU, two 12-core 2.5GHz CPUs, and 148GB virtual RAM. |
| Software Dependencies | Yes | We conduct experiments using PyTorch 1.5.1 on a machine with a TITAN RTX GPU, two 12-core 2.5GHz CPUs, and 148GB virtual RAM. |
| Experiment Setup | Yes | Parameter Settings of FL Model Training: We train the global models for 20 communication rounds. In each round, each client performs E = 10 epochs of local model updates via mini-batch SGD with a batch size of B = 100 for MNIST and Fashion-MNIST and B = 10 for HAR. Other hyperparameters during model training are inherited from the default settings of Adam (Kingma and Ba 2014). |
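
The client-side training configuration quoted in the Experiment Setup row can be summarized in a minimal PyTorch sketch, shown below. It assumes the stated settings (E = 10 local epochs, batch size B = 100 for MNIST/Fashion-MNIST or B = 10 for HAR, Adam with default hyperparameters); the function name `local_update` and the returned parameter-difference format are illustrative assumptions, and the paper's server-side FedInv inversion and aggregation steps are not reproduced here.

```python
# Hypothetical sketch of one client's local update under the settings quoted
# above (E = 10 local epochs, B = 100 for MNIST/Fashion-MNIST, B = 10 for HAR,
# Adam with default hyperparameters). Names are illustrative, not from the paper.
import copy

import torch
from torch import nn
from torch.utils.data import DataLoader


def local_update(global_model: nn.Module,
                 client_dataset,
                 epochs: int = 10,
                 batch_size: int = 100,
                 device: str = "cpu") -> dict:
    """Run E epochs of local training and return the local model update."""
    model = copy.deepcopy(global_model).to(device)
    loader = DataLoader(client_dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters())  # default Adam settings
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

    # The local update is the parameter difference sent back to the server.
    global_state = global_model.state_dict()
    return {name: param.cpu() - global_state[name].cpu()
            for name, param in model.state_dict().items()}
```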