EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning
Authors: Syed Irfan Ali Meerza, Jian Liu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three datasets demonstrate the effectiveness and efficiency of our attack, even with state-of-the-art fairness optimization algorithms and secure aggregation rules employed. |
| Researcher Affiliation | Academia | Syed Irfan Ali Meerza and Jian Liu, University of Tennessee, Knoxville (smeerza@vols.utk.edu, jliu@utk.edu) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/irfanMee/EAB-FL |
| Open Datasets | Yes | We evaluate the proposed EAB-FL using the following three datasets in non-IID settings: (1) CelebA [Liu et al., 2018]; (2) Adult Income [Dua and Graff, 2017]; (3) UTKFace [Zhang et al., 2017]. To show the real-world implications, we also apply EAB-FL to the MovieLens 1M dataset [Harper and Konstan, 2015] |
| Dataset Splits | No | The paper mentions a validation set ("The server can evaluate the accuracy of the submitted model updates on a validation set") but does not specify how the datasets were split into training, validation, and testing subsets, such as percentages or sample counts. |
| Hardware Specification | Yes | Table 3 shows the average time required to successfully attack the global model per communication round on the CelebA dataset using an Nvidia Quadro A100 GPU. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries). |
| Experiment Setup | No | The paper describes the attack's optimization problem with parameters such as γ and ρ (Equation 7) and κ for biased dataset selection, but it does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings in the main text. |
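
The paper evaluates EAB-FL on client data distributed in non-IID settings (see the "Open Datasets" row) but does not spell out the partitioning protocol. A common way to simulate such label skew in federated learning experiments is a Dirichlet partition; the sketch below is illustrative and is not necessarily the authors' exact procedure (the function name and parameters are assumptions, not taken from the paper).

```python
import numpy as np

def dirichlet_partition(labels, num_clients=10, alpha=0.5, seed=0):
    """Split sample indices across clients with label skew.

    Per-class client proportions are drawn from Dirichlet(alpha);
    a smaller alpha yields a more heterogeneous (non-IID) split.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = np.flatnonzero(labels == cls)
        rng.shuffle(cls_idx)
        # Fraction of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client, shard in enumerate(np.split(cls_idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices
```

Every sample index lands on exactly one client, and lowering `alpha` (e.g., 0.1) concentrates each class on fewer clients, which is how label-skew severity is usually controlled in such setups.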