Backdoor Scanning for Deep Neural Networks through K-Arm Optimization

Authors: Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Siyuan Cheng, Shiqing Ma, Xiangyu Zhang

ICML 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "At the time of submission, the evaluation of our method on over 4000 models in the IARPA TrojAI competition from round 1 to the latest round 4 achieves top performance on the leaderboard. Our technique also supersedes five state-of-the-art techniques in terms of accuracy and the scanning time needed." |
| Researcher Affiliation | Academia | "1. Department of Computer Science, Purdue University, West Lafayette, IN, USA; 2. Department of Computer Science, Rutgers University, Piscataway, NJ, USA." |
| Pseudocode | No | The paper describes the method's steps in text and diagrams (e.g., Figure 4), but it does not include a formal pseudocode block or algorithm listing. (A hedged sketch of the selection loop appears below this table.) |
| Open Source Code | Yes | "The code of our work is available at https://github.com/PurduePAML/K-ARM_Backdoor_Optimization" |
| Open Datasets | Yes | "We evaluate our prototype on 4000 models from IARPA TrojAI round 1 to the latest round 4 competitions, and a few complex models on ImageNet. Our technique achieved top performance on the TrojAI leaderboard and reached the round targets on the TrojAI test server for all rounds. ImageNet. We also use 7 VGG16 models on ImageNet (1000 classes) trojaned by TrojNN (Liu et al., 2018b), a kind of universal patch attack, and 6 models on ImageNet poisoned by hidden-trigger backdoors (Saha et al., 2020), with different structures including VGG16, AlexNet, DenseNet, Inception, ResNet and SqueezeNet. Other datasets. We also evaluate our method on 4 CIFAR10 and 4 GTSRB models trojaned by Input-Aware Dynamic Attack (Nguyen & Tran, 2020)." |
| Dataset Splits | No | The paper mentions training and test sets for its datasets but does not explicitly specify a validation split, nor does it give split percentages or sample counts. |
| Hardware Specification | Yes | "For fair comparison, comparative experiments are all done on an identical machine with a single 24GB-memory NVIDIA Quadro RTX 6000 GPU (the lab server configuration). Leaderboard results (on TrojAI test sets) were run on the IARPA server with a single 32GB-memory NVIDIA V100 GPU." |
| Software Dependencies | No | The paper mentions using the "Adam (Kingma & Ba, 2014) optimizer" but does not specify version numbers for it or for any other key software libraries and dependencies. |
| Experiment Setup | Yes | "We use Adam (Kingma & Ba, 2014) optimizer with learning rate 0.1, β = {0.5, 0.9} for all the experiments. The optimizer may return failure for the current round when the budget for the label runs out (which is 10 epochs in this paper). In this paper, we set [the total round budget] = 300 for all TrojAI models and = 350 for ImageNet models." (A hedged optimizer-setup sketch appears below.) |
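
Since the paper provides no algorithm listing, the following is a minimal sketch of one plausible reading of the K-Arm label-selection loop it describes in text: each candidate target label is a bandit arm, rounds of trigger optimization are pulls, and compute concentrates on the most promising labels. The `Arm` class, the `optimize_step` placeholder, the epsilon-greedy rule, and the `EPSILON`/`threshold` constants are illustrative assumptions; only the 10-epoch per-label budget and the 300/350 total round budgets come from the quotes above.

```python
import random

EPSILON = 0.3          # exploration rate (assumed; not given in the quotes above)
TOTAL_ROUNDS = 300     # total optimization rounds (paper: 300 for TrojAI models)
PER_LABEL_BUDGET = 10  # per-label epoch budget (paper: 10 epochs)

class Arm:
    """One candidate target label, treated as a bandit arm."""
    def __init__(self, label):
        self.label = label
        self.objective = random.uniform(50.0, 100.0)  # e.g., current trigger size
        self.epochs = 0

def optimize_step(arm):
    """Stand-in for one round of trigger optimization on this label.
    A real implementation would run gradient steps on a trigger
    mask/pattern and return the updated objective (trigger size)."""
    arm.epochs += 1
    arm.objective *= random.uniform(0.7, 1.0)  # pretend the trigger shrinks
    return arm.objective

def k_arm_scan(num_labels, threshold=20.0):
    arms = [Arm(i) for i in range(num_labels)]
    for _ in range(TOTAL_ROUNDS):
        live = [a for a in arms if a.epochs < PER_LABEL_BUDGET]
        if not live:
            break  # every label's budget is exhausted
        # Stochastic selection: usually pull the most promising arm
        # (smallest objective), occasionally explore a random one.
        if random.random() < EPSILON:
            arm = random.choice(live)
        else:
            arm = min(live, key=lambda a: a.objective)
        optimize_step(arm)
    best = min(arms, key=lambda a: a.objective)
    # An abnormally small optimized trigger suggests a backdoor.
    return best.label, best.objective, best.objective < threshold

if __name__ == "__main__":
    label, size, flagged = k_arm_scan(num_labels=5)
    print(f"most suspicious label={label}, trigger size={size:.1f}, trojaned={flagged}")
```

The bandit framing is what makes scanning tractable on 1000-class models: instead of exhaustively optimizing a trigger for every label, the budget flows to the handful of labels whose objectives improve fastest.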
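
For the quoted optimizer settings, a minimal PyTorch sketch of the trigger-optimization setup might look as follows. The `mask`/`pattern` tensors, their 224x224 RGB shapes, and the placeholder loss are hypothetical stand-ins; the Adam hyperparameters and the 10-epoch per-label budget are the ones quoted above.

```python
import torch

# Hypothetical trigger parameters; shapes assume 224x224 RGB inputs.
mask = torch.zeros(1, 224, 224, requires_grad=True)    # where the trigger applies
pattern = torch.rand(3, 224, 224, requires_grad=True)  # what the trigger looks like

# Quoted setup: Adam with learning rate 0.1 and beta = {0.5, 0.9}.
optimizer = torch.optim.Adam([mask, pattern], lr=0.1, betas=(0.5, 0.9))

for epoch in range(10):  # per-label budget: 10 epochs in the paper
    optimizer.zero_grad()
    # Placeholder objective; the real one would combine misclassification
    # loss toward the target label with a trigger-size regularizer.
    loss = mask.abs().sum() + 1e-3 * pattern.pow(2).sum()
    loss.backward()
    optimizer.step()
```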