Progressive Poisoned Data Isolation for Training-Time Backdoor Defense

Authors: Yiming Chen, Haiwei Wu, Jiantao Zhou

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on multiple benchmark datasets and DNN models, assessed against nine state-of-the-art backdoor attacks, demonstrate the superior performance of our PIPD method for backdoor defense. For instance, our PIPD achieves an average True Positive Rate (TPR) of 99.95% and an average False Positive Rate (FPR) of 0.06% for diverse attacks over the CIFAR-10 dataset, markedly surpassing the performance of state-of-the-art methods.
Researcher Affiliation | Academia | Yiming Chen, Haiwei Wu, and Jiantao Zhou; State Key Laboratory of Internet of Things for Smart City, Department of Computer and Information Science, University of Macau; {yc17486, yc07912, jtzhou}@um.edu.mo
Pseudocode | Yes | The algorithm of our PIPD is shown in Appendix A.
Open Source Code | Yes | The code is available at https://github.com/RorschachChen/PIPD.git.
Open Datasets | Yes | We conduct experiments over the CIFAR-10 (Krizhevsky, Hinton et al. 2009) and a subset of ImageNet (Deng et al. 2009) datasets.
Dataset Splits | No | The paper refers to 'train' and 'test' datasets but does not explicitly mention a 'validation' dataset or its split for hyperparameter tuning or early stopping.
Hardware Specification | No | The paper does not explicitly describe the hardware used for running experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper mentions 'PyTorch' in its references, but does not explicitly state version numbers for PyTorch or any other software dependencies used in its experimental setup.
Experiment Setup | Yes | Implementation Details: We employ ResNet-18 (He et al. 2016) as our default network. During the one-step isolation process, we extract the feature maps subsequent to each convolutional layer. The pre-isolation epoch is designated at 200, with the progressive iteration number T set to 8, and the epochs for selective training set to 20.
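The isolation step quoted above extracts feature maps after every convolutional layer. A minimal PyTorch sketch of that extraction using forward hooks follows; the small sequential CNN here is a hypothetical stand-in for the paper's ResNet-18, and the function name is illustrative, not from the released code.

```python
import torch
import torch.nn as nn

def collect_conv_features(model, x):
    """Collect the feature map produced after each Conv2d layer
    for the input batch x, via temporary forward hooks."""
    features, hooks = [], []
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # hook signature: (module, input, output)
            hooks.append(module.register_forward_hook(
                lambda m, inp, out: features.append(out.detach())))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()  # clean up so later forward passes are unaffected
    return features

# Hypothetical stand-in network (the paper uses ResNet-18 on CIFAR-10).
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
feats = collect_conv_features(net, torch.randn(4, 3, 32, 32))
print([tuple(f.shape) for f in feats])  # one feature map per conv layer
```

Hooks keep the extraction decoupled from the network definition, so the same routine would apply unchanged to a torchvision ResNet-18.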