An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

Authors: Sadegh Farhadkhani, Rachid Guerraoui, Lê Nguyên Hoang, Oscar Villemaud

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Moreover, using our equivalence, we derive a practical attack that we show (theoretically and empirically) can be very effective against classical personalized federated learning models. Our experiment also shows the effectiveness of a simple protection, which prevents attackers from arbitrarily manipulating the trained algorithm.
Researcher Affiliation | Academia | IC School, EPFL, Lausanne, Switzerland.
Pseudocode | No | No structured pseudocode or algorithm blocks were found.
Open Source Code | Yes | The code can be found at https://github.com/LPD-EPFL/Attack_Equivalence.
Open Datasets | Yes | We deployed CGA to bias the federated learning of MNIST. We constructed a setting where 10 idle users draw randomly 10 data points from the Fashion MNIST dataset. We considered VGG 13-BN, which was pretrained on cifar-10 by (Phan, 2021). (See the data-loading sketch after the table.)
Dataset Splits | No | The paper mentions 'training set' and 'test dataset' but does not specify validation splits or numerical proportions for train/val/test splits.
Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) were found.
Software Dependencies | No | The paper mentions 'Pytorch' in a citation (Phan, 2021) related to a pretrained model, but does not provide specific version numbers for the software dependencies of its own implementation.
Experiment Setup | Yes | We use λ = 1, Adam optimizer and a decreasing learning rate. (See the training-setup sketch below.)
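
The data setup reported in the Open Datasets row (MNIST for the federated task, 10 idle users each drawing 10 random Fashion-MNIST points, and a VGG-13-BN pretrained on CIFAR-10 by Phan, 2021) can be approximated with standard torchvision utilities. The sketch below is a minimal reconstruction under assumptions, not the authors' code: the seed, batch size, and the ImageNet weights used as a stand-in for the CIFAR-10-pretrained model are all placeholders.

```python
# Hypothetical sketch of the data setup described in the paper.
# The paper's VGG-13-BN is pretrained on CIFAR-10 (Phan, 2021); torchvision's
# ImageNet weights are used here only as a stand-in.
import torch
from torch.utils.data import Subset, DataLoader
from torchvision import datasets, transforms, models

transform = transforms.ToTensor()

# Federated-learning data: MNIST for the honest training task.
mnist_train = datasets.MNIST("data", train=True, download=True, transform=transform)

# 10 "idle" users, each drawing 10 random Fashion-MNIST points (as reported).
fashion = datasets.FashionMNIST("data", train=True, download=True, transform=transform)
generator = torch.Generator().manual_seed(0)  # seed is an assumption
perm = torch.randperm(len(fashion), generator=generator)
idle_user_data = [
    Subset(fashion, perm[10 * u : 10 * (u + 1)].tolist()) for u in range(10)
]

# Backbone: VGG-13 with batch norm (stand-in weights, see comment above).
vgg13_bn = models.vgg13_bn(weights="IMAGENET1K_V1")

loader = DataLoader(mnist_train, batch_size=64, shuffle=True)  # batch size assumed
```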
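The reported experiment setup (λ = 1, Adam, decreasing learning rate) leaves the initial learning rate, the decay schedule, and the exact role of λ unspecified in the excerpt. The sketch below assumes λ weights an L2 proximity term to the shared parameters, as is common in personalized federated learning, and uses an arbitrary 1/√t decay; `model`, `lr=1e-3`, and `training_step` are illustrative names and values only.

```python
# Minimal sketch of the reported training configuration: Adam with a decaying
# learning rate and a regularization weight lambda = 1. The decay schedule,
# initial learning rate, and the regularized objective are assumptions.
import torch

model = torch.nn.Linear(784, 10)  # placeholder model
lam = 1.0                         # lambda = 1, as reported

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is assumed
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: 1.0 / (1.0 + step) ** 0.5  # assumed decay
)

def training_step(x, y, global_params):
    """One local step of an assumed L2-regularized (personalized) objective."""
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    # lambda-weighted proximity to the shared/global parameters (assumed form).
    for p, g in zip(model.parameters(), global_params):
        loss = loss + 0.5 * lam * (p - g).pow(2).sum()
    loss.backward()
    optimizer.step()
    scheduler.step()
    return loss.item()
```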