DarkFed: A Data-Free Backdoor Attack in Federated Learning

Authors: Minghui Li, Wei Wan, Yuxuan Ning, Shengshan Hu, Lulu Xue, Leo Yu Zhang, Yichen Wang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A substantial body of empirical evidence validates the tangible effectiveness of DarkFed.
Researcher Affiliation | Academia | 1 School of Software Engineering, Huazhong University of Science and Technology; 2 National Engineering Research Center for Big Data Technology and System; 3 Services Computing Technology and System Lab; 4 Hubei Engineering Research Center on Big Data Security; 5 Hubei Key Laboratory of Distributed System Security; 6 School of Cyber Science and Engineering, Huazhong University of Science and Technology; 7 School of Computer Science and Technology, Huazhong University of Science and Technology; 8 School of Information and Communication Technology, Griffith University
Pseudocode | Yes | Algorithm 1: A Complete Description of DarkFed
Open Source Code | Yes | Our codes will be available at https://github.com/hustweiwan/DarkFed.
Open Datasets | Yes | We consider three multichannel image classification datasets: CIFAR-10 [Krizhevsky and Hinton, 2009], CIFAR-100 [Krizhevsky and Hinton, 2009], and GTSRB [Stallkamp et al., 2011].
Dataset Splits | No | The paper uses well-known datasets (CIFAR-10, CIFAR-100, GTSRB) that have standard splits, but it does not explicitly state the training/validation/test splits used in its own experiments, nor does it describe a validation methodology (e.g., split percentages, sample counts, or cross-validation).
Hardware Specification | No | The paper does not describe the hardware used to run its experiments, such as specific GPU/CPU models or other machine specifications.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., PyTorch 1.9) needed to replicate the experiments.
Experiment Setup | Yes | The parameter settings in Alg. 1 are delineated in Tab. 4. One might wonder why the estimated cosine similarity between benign updates (i.e., α) consistently remains at 0 for different datasets.
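
The Experiment Setup row refers to an estimated cosine similarity α between benign updates. The sketch below shows one standard way such a similarity could be computed between two flattened client updates; this is a minimal illustration assuming a PyTorch environment, and the helper names (flatten_update, update_cosine_similarity) are hypothetical rather than the paper's own code.

```python
import torch

def flatten_update(update):
    # Concatenate all parameter tensors of one client's model update into a single vector.
    return torch.cat([p.detach().reshape(-1) for p in update])

def update_cosine_similarity(update_a, update_b):
    # Standard cosine similarity between two flattened updates; an estimate of alpha
    # near 0 corresponds to the two updates being close to orthogonal.
    a, b = flatten_update(update_a), flatten_update(update_b)
    return (torch.dot(a, b) / (a.norm() * b.norm() + 1e-12)).item()
```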
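
For the Open Datasets row, the three benchmarks named there are publicly downloadable. A minimal sketch, assuming a torchvision toolchain (version 0.12 or later for GTSRB), which the paper does not specify; the root path and transform are illustrative placeholders.

```python
# Minimal sketch: fetching the three benchmarks named in the Open Datasets row.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # illustrative transform; the paper's preprocessing is not given

cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar100_train = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)
gtsrb_train = datasets.GTSRB(root="./data", split="train", download=True, transform=to_tensor)
```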