Chronic Poisoning: Backdoor Attack against Split Learning
Authors: Fangchao Yu, Bo Zeng, Kai Zhao, Zhi Pang, Lina Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implement SFI on various benchmark datasets, and extensive experimental results demonstrate its effectiveness and generality. For example, success rates of our attack on MNIST, Fashion, and CIFAR10 datasets all exceed 90%, with limited impact on the main task. |
| Researcher Affiliation | Academia | Fangchao Yu, Bo Zeng, Kai Zhao, Zhi Pang, Lina Wang Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University {fangchao, bobozen, kizhao, zhipang, lnwang}@whu.edu.cn |
| Pseudocode | No | The paper describes the SFI framework in text and uses figures (Figure 1, Figure 2) to illustrate the overview and stages, but no pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | Our code is available from https://github.com/chaoge123456/chronicpoisoning. |
| Open Datasets | Yes | We conduct our experiments on four datasets: MNIST (Deng 2012), Fashion (Fashion-MNIST) (Xiao, Rasul, and Vollgraf 2017), CIFAR10, and CIFAR100 (Krizhevsky, Hinton et al. 2009). (A loading sketch for all four datasets follows the table.) |
| Dataset Splits | No | The paper mentions `Dc` (client dataset) and `Ds` (shadow dataset) and describes how training data is distributed. However, it does not explicitly state the specific train/validation/test splits (e.g., percentages or counts) or how a validation set was created or used for hyperparameter tuning for their experiments. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We present the training hyperparameters and configurations in Table 1. Our attack framework consists of five models: Fc, Fs, Fm, Fa, and Fd. The main task of split learning involves training a ResNet18 model composed of Fc and Fm. We employ two different split strategies: the split point of Split A is before the first ResBlock of ResNet18, and the split point of Split B is after the first ResBlock of ResNet18. (Both split points are sketched in code below the table.) |
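
Since all four benchmarks named in the Open Datasets row are standard public datasets, they can be fetched directly. Below is a minimal sketch of loading them with torchvision; the `root` path and the bare `ToTensor` transform are illustrative assumptions, as the paper does not state its preprocessing pipeline.

```python
# Minimal sketch: fetching the four benchmark datasets named in the paper
# via torchvision. Transform and root path are assumptions; the paper does
# not specify its preprocessing.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist    = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
fashion  = datasets.FashionMNIST(root="./data", train=True, download=True, transform=to_tensor)
cifar10  = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)
```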
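
The Experiment Setup row describes two split strategies for the ResNet18 main task. The sketch below shows one plausible way to realize Split A and Split B in PyTorch using torchvision's ResNet-18. The exact layer boundaries (reading "before the first ResBlock" as splitting after the stem, and "after the first ResBlock" as splitting after `layer1`) and the `split_resnet18` helper are assumptions for illustration, not the authors' implementation; their GitHub code linked above is authoritative.

```python
# Sketch of the two split strategies (Split A / Split B), assuming a
# torchvision ResNet-18. Fc is the client-side model, Fm the server-side
# model that completes the main task.
import torch
import torch.nn as nn
from torchvision.models import resnet18


def split_resnet18(strategy: str = "A") -> tuple[nn.Module, nn.Module]:
    """Return (Fc, Fm): the client-side and server-side halves."""
    model = resnet18(num_classes=10)
    stem = [model.conv1, model.bn1, model.relu, model.maxpool]
    blocks = [model.layer1, model.layer2, model.layer3, model.layer4]
    head = [model.avgpool, nn.Flatten(), model.fc]

    if strategy == "A":    # split before the first ResBlock
        fc_layers, fm_layers = stem, blocks + head
    elif strategy == "B":  # split after the first ResBlock
        fc_layers, fm_layers = stem + blocks[:1], blocks[1:] + head
    else:
        raise ValueError(f"unknown split strategy: {strategy}")

    return nn.Sequential(*fc_layers), nn.Sequential(*fm_layers)


# Usage: the client runs Fc locally and sends the smashed data (cut-layer
# activations) to the server, which finishes the forward pass with Fm.
Fc, Fm = split_resnet18("A")
x = torch.randn(1, 3, 32, 32)   # a CIFAR10-sized input
smashed = Fc(x)                 # client-side activations
logits = Fm(smashed)            # server-side output
```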