Imperceptible Backdoor Attack: From Input Space to Feature Representation

Authors: Nan Zhong, Zhenxing Qian, Xinpeng Zhang

Venue: IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We conduct extensive experiments including different datasets and network structures to demonstrate the effectiveness and stealthiness of our approach." Sections: 4 Experiment Results; 4.1 Experimental Setup; 4.2 Attack Effectiveness and Visualization; 4.3 Defences; 4.4 Ablation Studies. |
| Researcher Affiliation | Academia | School of Computer Science, Fudan University. {nzhong20, zxqian, zhangxinpeng}@fudan.edu.cn |
| Pseudocode | No | The paper describes the method with concrete formulas and verbal explanation (e.g., "Sampling a multinomial distribution can be expressed by the following equation") and gives Table 1 for the structure of the "Sample Net", but it contains no formal "Algorithm" or "Pseudocode" block. A sampling sketch follows the table. |
| Open Source Code | Yes | "Our source code is available at https://github.com/Ekko-zn/IJCAI2022-Backdoor." |
| Open Datasets | Yes | "We adopt two different datasets including GTSRB [Houben et al., 2013] and CelebA [Liu et al., 2015]." Both are well-known public datasets, and citations are provided. |
| Dataset Splits | No | The paper states that "the number of training samples and test samples are 39209 and 12630 for GTSRB, and 162084 and 40515 for CelebA, respectively," but it does not mention a separate validation split or its size. A loading sketch follows the table. |
| Hardware Specification | Yes | "All experiments are conducted with PyTorch 1.10 version with an NVIDIA RTX3090." |
| Software Dependencies | Yes | "All experiments are conducted with PyTorch 1.10 version with an NVIDIA RTX3090." An environment check follows the table. |
| Experiment Setup | Yes | "The batch size and learning rate are set as 16 and 1e-3, respectively. The hyperparameters α and β are set as 0.3 and 0.1, respectively. We keep α unchanged during the training process. We multiply β by 2 every 20 epochs. The total epochs are 110 and 50 for GTSRB and CelebA, respectively. We adopt Adam optimizer." A training-schedule sketch follows the table. |
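
Since the paper provides no pseudocode, here is a minimal PyTorch sketch of the multinomial-sampling step the quoted sentence refers to. The straight-through relaxation used to keep the sampling differentiable is an assumption for illustration; the paper's exact formulation may differ.

```python
import torch

def sample_multinomial_st(logits: torch.Tensor) -> torch.Tensor:
    """Draw one-hot samples from a multinomial (categorical) distribution.

    A straight-through estimator keeps the sampling step differentiable;
    whether the paper uses this exact relaxation is an assumption here.
    """
    probs = torch.softmax(logits, dim=-1)                  # (batch, K)
    idx = torch.multinomial(probs, num_samples=1)          # (batch, 1)
    hard = torch.zeros_like(probs).scatter_(-1, idx, 1.0)  # one-hot samples
    # Forward pass uses the hard sample; gradients flow through `probs`.
    return hard + probs - probs.detach()

# Example: sample from a 4-way distribution for a batch of 2
logits = torch.randn(2, 4, requires_grad=True)
print(sample_multinomial_st(logits))
```

The one-hot output can then be used downstream while gradients still reach the logits, which is the usual reason for expressing sampling "by an equation" rather than a bare random draw.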
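
For the Dataset Splits row: both datasets ship with torchvision (GTSRB since torchvision 0.12), so the quoted train/test counts can be checked directly. Whether the paper used these loaders is an assumption; the CelebA numbers in particular (162084/40515) do not match torchvision's default train/valid/test partition, which suggests a custom re-split.

```python
from torchvision import datasets

root = "data"  # hypothetical download directory

# GTSRB ships with an official train/test partition.
gtsrb_train = datasets.GTSRB(root, split="train", download=True)
gtsrb_test = datasets.GTSRB(root, split="test", download=True)

# CelebA's default partition is train/valid/test; the paper's
# 162084/40515 split differs, so it presumably re-partitions the data.
celeba_train = datasets.CelebA(root, split="train", download=True)
celeba_test = datasets.CelebA(root, split="test", download=True)

for name, ds in [("GTSRB/train", gtsrb_train), ("GTSRB/test", gtsrb_test),
                 ("CelebA/train", celeba_train), ("CelebA/test", celeba_test)]:
    print(f"{name}: {len(ds)} samples")  # compare with the quoted counts
```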
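
The Hardware and Software rows pin down only two facts: PyTorch 1.10 and an RTX 3090. A reproduction run can report them up front; the check below is a convenience, not something the paper prescribes.

```python
import torch

print("PyTorch version:", torch.__version__)      # paper reports 1.10
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))  # paper reports an RTX 3090
else:
    print("No CUDA device found; the paper's experiments used a GPU.")
```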
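
Finally, the Experiment Setup row fully specifies the optimizer and the α/β schedule. The skeleton below wires those numbers into a runnable PyTorch loop; the model and the two weighted loss terms are hypothetical stand-ins, since the excerpt does not say which objectives α and β actually weight.

```python
import torch
from torch import nn

# Hyperparameters quoted in the paper
BATCH_SIZE, LR = 16, 1e-3
ALPHA, BETA0 = 0.3, 0.1   # alpha stays fixed; beta is doubled every 20 epochs
EPOCHS = 110              # 110 for GTSRB, 50 for CelebA

# Stand-in model and data so the schedule runs end to end; the real
# networks are the paper's classifier and "Sample Net" (assumption).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 43))
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(64, 3, 32, 32),
                                   torch.randint(0, 43, (64,))),
    batch_size=BATCH_SIZE, shuffle=True)
ce = nn.CrossEntropyLoss()

def loss_alpha(logits):   # hypothetical alpha-weighted regularizer
    return logits.pow(2).mean()

def loss_beta(logits):    # hypothetical beta-weighted regularizer
    return logits.abs().mean()

for epoch in range(EPOCHS):
    beta = BETA0 * 2 ** (epoch // 20)  # "multiply beta by 2 every 20 epochs"
    for x, y in loader:
        optimizer.zero_grad()
        logits = model(x)
        loss = (ce(logits, y)
                + ALPHA * loss_alpha(logits)
                + beta * loss_beta(logits))
        loss.backward()
        optimizer.step()
```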