Adversarial Feature Map Pruning for Backdoor

Authors: Dong Huang, Qingwen Bu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate the effectiveness of the FMP, we conduct extensive experimental evaluations on multiple benchmark datasets, including CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), and GTSRB (Stallkamp et al., 2012), under diverse attack scenarios. In our experiments, we consider various backdoor triggers with different complexities and compare the performance of FMP against several state-of-the-art backdoor defense methods. Models are evaluated with three primary metrics: Accuracy, Attack Success Rate (ASR), and Robust Accuracy (RA). (The three metrics are sketched below the table.)
Researcher Affiliation | Collaboration | Dong Huang (University of Hong Kong), Qingwen Bu (Shanghai AI Laboratory; Shanghai Jiao Tong University)
Pseudocode | Yes | Algorithm 1: Feature Reverse Generation (a hedged sketch of the idea appears below the table)
Open Source Code | Yes | Our code is publicly available at: https://github.com/hku-systems/FMP.
Open Datasets | Yes | To validate the effectiveness of the FMP, we conduct extensive experimental evaluations on multiple benchmark datasets, including CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), and GTSRB (Stallkamp et al., 2012), under diverse attack scenarios.
Dataset Splits | Yes | We train the CIFAR10 and CIFAR100 datasets with 100 epochs, SGD momentum of 0.9, learning rate of 0.01, and batch size of 128, using the Cosine Annealing LR scheduler. The GTSRB dataset is trained with 50 epochs. We set the default ratio of the retraining data set at 10% to ensure a fair and consistent evaluation of defense strategies. (Data loading and the 10% retraining split are sketched below the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions software components and optimizers (e.g., 'SGD momentum', 'Cosine Annealing LR scheduler', 'Stochastic Gradient Descent (SGD) optimizer') and a benchmark ('BackdoorBench'), but it does not provide specific version numbers for any software, libraries, or frameworks used.
Experiment Setup | Yes | We train the CIFAR10 and CIFAR100 datasets with 100 epochs, SGD momentum of 0.9, learning rate of 0.01, and batch size of 128, using the Cosine Annealing LR scheduler. The GTSRB dataset is trained with 50 epochs. We set the poisoning rate to 10% by default. ... In order to repair the model, we adopt a configuration consisting of 10 epochs, a batch size of 256, and a learning rate of 0.01. The Cosine Annealing LR scheduler is employed alongside the Stochastic Gradient Descent (SGD) optimizer with a momentum of 0.9 for the client optimizer. We set the default ratio of the retraining data set at 10%... For FMP, we set p equal to 64 and ϵ as 1/255. (These settings are wired into a configuration sketch below the table.)
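The Research Type row names three evaluation metrics. Below is a minimal sketch of how they are commonly computed for a backdoor defense, assuming a PyTorch classifier and a poisoned loader that yields triggered inputs paired with their original labels; the function names are illustrative and the sketch does not exclude samples whose true label already equals the target class, a simplification that the authors' evaluation may handle differently.

```python
import torch

@torch.no_grad()
def evaluate_defense(model, clean_loader, poisoned_loader, target_label, device="cuda"):
    """Accuracy on clean data, Attack Success Rate (ASR) on triggered data,
    and Robust Accuracy (RA) on triggered data w.r.t. the original labels."""
    model.eval()

    def _rate(loader, against_target):
        hits, total = 0, 0
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            ref = torch.full_like(y, target_label) if against_target else y
            hits += (pred == ref).sum().item()
            total += y.numel()
        return hits / total

    accuracy = _rate(clean_loader, against_target=False)    # clean Accuracy
    asr = _rate(poisoned_loader, against_target=True)       # Attack Success Rate
    ra = _rate(poisoned_loader, against_target=False)       # Robust Accuracy
    return accuracy, asr, ra
```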
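The Pseudocode row refers to Algorithm 1 (Feature Reverse Generation), which the page does not reproduce. The sketch below only illustrates the general idea of generating an l-infinity-bounded, PGD-style perturbation that suppresses ("reverses") one feature map of a chosen layer, using the ε = 1/255 budget quoted in the Experiment Setup row; the function name, the squared-activation objective, the step size, and the step count are assumptions, not the authors' algorithm.

```python
import torch

def feature_reverse_generation(model, layer, channel, x, eps=1/255, steps=10):
    """PGD-style sketch (not the paper's Algorithm 1): craft an L-inf perturbation of
    radius `eps` that drives the activations of feature map `channel` in `layer`
    toward zero for the input batch `x`."""
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))

    alpha = eps / 4                      # assumed step size
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        model(x_adv)                     # forward pass fills feats["out"] via the hook
        loss = feats["out"][:, channel].pow(2).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()        # step that suppresses the feature map
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep a valid image range
    handle.remove()
    return x_adv.detach()
```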
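The Open Datasets and Dataset Splits rows cite public benchmarks and a 10% retraining split. The snippet below is a plain torchvision-based sketch of obtaining that data; the paper builds on BackdoorBench, so its actual preprocessing and split implementation may differ, and the resize, seed, and helper name here are this sketch's own choices.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

to_tensor = transforms.Compose([transforms.Resize((32, 32)), transforms.ToTensor()])

# Public benchmarks used in the paper, loaded via torchvision for illustration.
cifar10  = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)
gtsrb    = datasets.GTSRB(root="./data", split="train", download=True, transform=to_tensor)

def retraining_split(dataset, ratio=0.10, seed=0):
    """Hold out the quoted 10% of clean training data for the defender's retraining step."""
    n_retrain = int(len(dataset) * ratio)
    sizes = [n_retrain, len(dataset) - n_retrain]
    return random_split(dataset, sizes, generator=torch.Generator().manual_seed(seed))

retrain_set, remainder = retraining_split(cifar10)
```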
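The Experiment Setup row quotes the optimizer and scheduler settings. The sketch below simply wires those quoted hyperparameters into PyTorch objects; `model` is assumed to be the classifier under training or repair, and the helper name is illustrative rather than part of the released code.

```python
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

def make_optimizer(model, epochs, lr=0.01, momentum=0.9):
    """SGD with momentum 0.9 plus Cosine Annealing, as quoted in the Experiment Setup row."""
    optimizer = SGD(model.parameters(), lr=lr, momentum=momentum)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)
    return optimizer, scheduler

# Pre-training: 100 epochs for CIFAR-10/100 (50 for GTSRB), batch size 128, lr 0.01.
train_opt, train_sched = make_optimizer(model, epochs=100)

# Post-pruning repair: 10 epochs, batch size 256, lr 0.01, on the 10% retraining split.
# FMP's own hyperparameters from the row above: p = 64 and eps = 1/255.
repair_opt, repair_sched = make_optimizer(model, epochs=10)
```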