Towards Stable Backdoor Purification through Feature Shift Tuning

Authors: Rui Min, Zeyu Qin, Li Shen, Minhao Cheng

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our FST provides consistently stable performance under different attack settings.
Researcher Affiliation | Collaboration | 1 Department of Computer Science & Engineering, HKUST; 2 JD Explore Academy
Pseudocode | Yes | Algorithm 1: Feature Shift Tuning (FST)
Open Source Code | Yes | Our codes are available at https://github.com/AISafety-HKUST/stable_backdoor_purification.
Open Datasets | Yes | We conduct experiments on four widely used image classification datasets: CIFAR-10 [15], GTSRB [32], Tiny-ImageNet [8], and CIFAR-100 [15].
Dataset Splits | Yes | Following previous work [41], we leave 2% of the original training data as the tuning dataset. For CIFAR-100 and Tiny-ImageNet, we note that a small tuning dataset would hurt model performance, and we therefore increase the tuning dataset to 5% of the training set.
Hardware Specification | Yes | We conducted all the experiments with 4 NVIDIA 3090 GPUs.
Software Dependencies | No | The paper mentions PyTorch as the provider of pre-trained weights but does not specify version numbers for PyTorch or any other software dependencies needed for replication.
Experiment Setup | Yes | For our FST, we adopt SGD with an initial learning rate of 0.01 and momentum of 0.9 for both the CIFAR-10 and GTSRB datasets, and decrease the learning rate to 0.001 for both CIFAR-100 and Tiny-ImageNet to prevent large degradation of the original performance. We fine-tune the models for 10 epochs on CIFAR-10, and for 15 epochs on GTSRB, CIFAR-100, and Tiny-ImageNet. We set α to 0.2 for CIFAR-10, 0.1 for GTSRB, and 0.001 for both CIFAR-100 and Tiny-ImageNet.
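The split sizes and hyperparameters reported above can be collected into a small configuration sketch. This is a hypothetical helper, not code from the paper's repository; the training-set sizes (50,000 images for CIFAR-10/CIFAR-100, 39,209 for GTSRB, 100,000 for Tiny-ImageNet) are the standard published values rather than figures stated in this report.

```python
# Hypothetical sketch collecting the tuning setup quoted in the
# "Dataset Splits" and "Experiment Setup" rows above. Training-set sizes
# are the standard published dataset sizes, not taken from this report.

STANDARD_TRAIN_SIZES = {
    "cifar10": 50_000,
    "gtsrb": 39_209,
    "cifar100": 50_000,
    "tiny_imagenet": 100_000,
}

# Per-dataset FST hyperparameters: fraction of training data held out for
# tuning, SGD learning rate and momentum, fine-tuning epochs, and alpha.
FST_CONFIG = {
    "cifar10":       {"tune_frac": 0.02, "lr": 0.01,  "momentum": 0.9, "epochs": 10, "alpha": 0.2},
    "gtsrb":         {"tune_frac": 0.02, "lr": 0.01,  "momentum": 0.9, "epochs": 15, "alpha": 0.1},
    "cifar100":      {"tune_frac": 0.05, "lr": 0.001, "momentum": 0.9, "epochs": 15, "alpha": 0.001},
    "tiny_imagenet": {"tune_frac": 0.05, "lr": 0.001, "momentum": 0.9, "epochs": 15, "alpha": 0.001},
}

def tuning_set_size(dataset: str) -> int:
    """Number of training images held out as the tuning dataset."""
    return int(STANDARD_TRAIN_SIZES[dataset] * FST_CONFIG[dataset]["tune_frac"])
```

Under these assumptions, the 2% split corresponds to 1,000 tuning images on CIFAR-10, while the enlarged 5% split yields 2,500 images on CIFAR-100 and 5,000 on Tiny-ImageNet.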