Orion: Online Backdoor Sample Detection via Evolution Deviance
Authors: Huayang Huang, Qian Wang, Xueluan Gong, Tao Wang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on six attacks, three datasets, and two architectures verify the effectiveness of Orion. It is shown that Orion outperforms state-of-the-art defenses and can identify feature-hidden attacks with an F1-score of 90%, compared to 40% for other detection schemes. |
| Researcher Affiliation | Academia | 1. Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Hubei, China; 2. School of Computer Science, Wuhan University, Hubei, China |
| Pseudocode | No | No explicit pseudocode or algorithm blocks (e.g., labeled 'Algorithm X' or 'Pseudocode') were found in the paper. The methodology is described in prose. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code for the described methodology or provide a link to a code repository. |
| Open Datasets | Yes | We perform experiments on three datasets: CIFAR-10, GTSRB, and Tiny-ImageNet [Krizhevsky and Hinton, 2009; Stallkamp et al., 2012; Le and Yang, 2015]. |
| Dataset Splits | Yes | CIFAR-10 and GTSRB have an image size of 32x32 and contain 10 and 43 classes, respectively. CIFAR-10 has 50,000 training and 10,000 test samples. GTSRB contains 39,209 training and 12,630 validation images. Tiny-ImageNet is a subset of ImageNet, containing 200 classes; each class contains 500 training images and 50 test samples. (See the loading sketch after the table.) |
| Hardware Specification | Yes | All the experiments are carried out on a single NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer' and model architectures like 'VGG16-bn' and 'ResNet-56', but does not provide specific version numbers for any libraries, frameworks (e.g., PyTorch, TensorFlow), or other software dependencies. |
| Experiment Setup | Yes | We adopt the Adam optimizer to train each S-Net for 25 epochs, with a learning rate of 0.001. (See the training sketch after the table.) |
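
The split figures in the table map directly onto standard loaders. Below is a minimal loading sketch, assuming PyTorch/torchvision (torchvision >= 0.12 for the built-in GTSRB loader). Tiny-ImageNet has no torchvision loader, so reading it from an extracted directory with `ImageFolder` is an assumption, and the `data/` paths are placeholders.

```python
# Minimal sketch of the reported dataset splits, assuming torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# CIFAR-10: 50,000 training / 10,000 test samples, 32x32, 10 classes
cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar_test = datasets.CIFAR10("data", train=False, download=True, transform=to_tensor)

# GTSRB: 39,209 training / 12,630 validation images, 32x32, 43 classes
gtsrb_train = datasets.GTSRB("data", split="train", download=True, transform=to_tensor)
gtsrb_val = datasets.GTSRB("data", split="test", download=True, transform=to_tensor)

# Tiny-ImageNet: 200 classes, 500 training / 50 test images per class.
# Assumption: the archive has been extracted to data/tiny-imagenet-200/.
tiny_train = datasets.ImageFolder("data/tiny-imagenet-200/train", transform=to_tensor)
```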
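The quoted setup specifies only the optimizer, epoch count, and learning rate. The following is a minimal training sketch under those settings, assuming PyTorch; `SNet` is a hypothetical stand-in for the paper's sub-network, whose architecture is not given in the quoted text, and cross-entropy is an assumed loss.

```python
# Minimal training sketch for one S-Net with the reported hyperparameters.
import torch
import torch.nn as nn

def train_snet(s_net: nn.Module, loader: torch.utils.data.DataLoader,
               device: str = "cuda") -> nn.Module:
    s_net.to(device).train()
    optimizer = torch.optim.Adam(s_net.parameters(), lr=0.001)  # lr from the paper
    criterion = nn.CrossEntropyLoss()  # assumption: standard classification loss
    for _ in range(25):  # 25 epochs, as reported
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(s_net(images), labels)
            loss.backward()
            optimizer.step()
    return s_net
```

A loader built from any of the datasets above (e.g., `torch.utils.data.DataLoader(cifar_train, batch_size=128, shuffle=True)`) can be passed in directly; the batch size is an assumption, as the paper does not report one in the quoted text.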