BADFSS: Backdoor Attacks on Federated Self-Supervised Learning

Authors: Jiale Zhang, Chengcheng Zhu, Di Wu, Xiaobing Sun, Jianming Yong, Guodong Long

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate BADFSS from different perspectives. First, we compare its performance with state-of-the-art SSL backdoor attacks, which are implemented in FL. Then, we measure the effectiveness of BADFSS under various SSL and FL settings. Finally, we do ablation studies to find out how the parameters influence the performance of BADFSS.
Researcher Affiliation | Academia | (1) School of Information Engineering, Yangzhou University, China; (2) School of Mathematics, Physics and Computing, University of Southern Queensland, Australia; (3) School of Business, University of Southern Queensland, Australia; (4) Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney, Australia
Pseudocode | No | The paper includes a figure (Figure 2: Framework of BADFSS) illustrating the methodology, but it does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an unambiguous statement of code release or a link to a source code repository for the methodology described.
Open Datasets | Yes | We conduct experiments on four public datasets, i.e., CIFAR-10 [Krizhevsky et al., 2009], GTSRB [Stallkamp et al., 2012], CIFAR-100 [Krizhevsky et al., 2009], and Tiny-ImageNet.
Dataset Splits | Yes | Then, we freeze the encoder and train a new linear classifier using a small labeled subset of the datasets (1% or 10%). (A linear-probe sketch of this evaluation protocol follows the table.)
Hardware Specification | Yes | To simulate federated learning, we train each client on one NVIDIA V100 GPU.
Software Dependencies | No | We implement BADFSS in Python using the PyTorch framework.
Experiment Setup | Yes | Unless otherwise mentioned, we use MoCo-v2 as the default self-supervised learning algorithm and employ ResNet-18 [He et al., 2016] as the default network architecture for the encoders. Moreover, we use a two-layer multi-layer perceptron (MLP) as a predictor. Following previous work [Zhang et al., 2020; Zhuang et al., 2020; Zhuang et al., 2021], we use decay rate m = 0.99, batch size B = 128, SGD as the optimizer with learning rate η = 0.032, and run experiments with K = 5 clients (one is malicious and the poison ratio is 1%) for R = 100 training rounds, where each client performs E = 5 local epochs in each round. (The reported values are collected in the configuration sketch below.)
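The Experiment Setup row pins down the paper's main hyperparameters. As a reading aid, the following is a minimal sketch in PyTorch that collects those reported values into a configuration dictionary and shows a MoCo-style momentum (EMA) update consistent with the stated decay rate m = 0.99; the dictionary keys, function name, and structure are illustrative assumptions, not the authors' released implementation.

```python
import torch

# Hyperparameters as reported in the paper's experiment setup; the names and
# layout of this config dictionary are illustrative assumptions.
CONFIG = {
    "ssl_algorithm": "MoCo-v2",
    "encoder": "ResNet-18",
    "predictor": "2-layer MLP",
    "momentum_decay_m": 0.99,
    "batch_size": 128,
    "optimizer": "SGD",
    "learning_rate": 0.032,
    "num_clients_K": 5,      # one client is malicious
    "poison_ratio": 0.01,    # 1% of the malicious client's data is poisoned
    "rounds_R": 100,
    "local_epochs_E": 5,
}

@torch.no_grad()
def momentum_update(online_encoder, target_encoder, m=CONFIG["momentum_decay_m"]):
    """MoCo-style exponential moving average of the online (query) encoder into
    the momentum (key) encoder: theta_k <- m * theta_k + (1 - m) * theta_q."""
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.data.mul_(m).add_(p_o.data, alpha=1.0 - m)
```

The config is only a convenient summary of the row above; the momentum update is the standard MoCo-v2 EMA rule and is included because the paper names MoCo-v2 as its default SSL algorithm.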
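The Dataset Splits evidence describes the standard linear-evaluation protocol: the pretrained encoder is frozen and a new linear classifier is trained on a small labeled subset (1% or 10%). Below is a minimal sketch of such a probe, assuming a PyTorch encoder that maps images to 512-dimensional features (as ResNet-18 does) and a DataLoader over the labeled subset; the function name, hyperparameters, and loader are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

def linear_probe(encoder: nn.Module, labeled_loader, num_classes: int,
                 feat_dim: int = 512, epochs: int = 100, lr: float = 0.03,
                 device: str = "cuda"):
    """Freeze the self-supervised encoder and train only a linear classifier
    on the small labeled subset (1% or 10%), as described in the paper."""
    encoder.to(device).eval()
    for p in encoder.parameters():        # freeze the pretrained encoder
        p.requires_grad = False

    classifier = nn.Linear(feat_dim, num_classes).to(device)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in labeled_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():          # encoder produces fixed features
                feats = encoder(images)
            loss = criterion(classifier(feats), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return classifier
```

Only the linear layer receives gradients here, so the probe measures the quality of the frozen representation, which is the quantity the paper's downstream accuracy (and attack success rate) is computed on.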