Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios

Authors: Zhen Xiang, David Miller, George Kesidis

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The excellent performance of our method is demonstrated on six benchmark datasets. Notably, our detection framework is also applicable to multi-class scenarios with multiple attacks. Code is available at https://github.com/zhenxianglance/2ClassBADetection.
Researcher Affiliation | Academia | Zhen Xiang, David J. Miller & George Kesidis, School of EECS, Pennsylvania State University, {zux49, djm25, gik2}@psu.edu
Pseudocode | Yes | Algorithm 1: BA detection using ET statistics.
Open Source Code | Yes | Code is available at https://github.com/zhenxianglance/2ClassBADetection.
Open Datasets | Yes | Our experiments involve six common benchmark image datasets with a variety of image sizes and color scales: CIFAR-10, CIFAR-100 (Krizhevsky, 2012), STL-10 (Coates et al., 2011), Tiny ImageNet, FMNIST (Xiao et al., 2017), and MNIST (Lecun et al., 1998). All the datasets are associated with the torchvision package, except that STL-10 is downloaded from the official website https://cs.stanford.edu/~acoates/stl10/.
Dataset Splits | No | The paper mentions using "the original train-test split" for datasets but does not explicitly describe a separate validation split or its percentages/counts. It states: "For each generated 2-class domain, we use the subset of data associated with these two (super) classes from the original dataset, with the original train-test split." (Apdx D.2)
Hardware Specification | Yes | Execution time is measured on a dual-card RTX2080-Ti (11GB) GPU.
Software Dependencies | No | The paper mentions software components like the torchvision package, Adam, and stochastic gradient descent (SGD), but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | In Tab. 6, we show the training details including learning rate, batch size, number of epochs, whether or not training data augmentation is used, and choice of optimizer (Adam (D. P. Kingma, 2015) or stochastic gradient descent (SGD)) for 2-class domains generated from CIFAR-10, CIFAR-100, STL-10, Tiny ImageNet, FMNIST, and MNIST, respectively. Table 6 provides specific values for these parameters.