Hidden Trigger Backdoor Attacks
Authors: Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
AAAI 2020, pp. 11957–11965
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks. [...] 4. Experiments |
| Researcher Affiliation | Academia | Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash University of Maryland, Baltimore County {anisaha1, akshayv1, hpirsiav}@umbc.edu |
| Pseudocode | Yes | Algorithm 1: Generating poisoning data (a Python sketch of this procedure follows the table) |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the methodology described, nor does it include a link to a code repository. |
| Open Datasets | Yes | We divide the ImageNet data to three sets for each category: 200 images for generating the poisoned data, 800 images for training the binary classifier, and 100 images for testing the binary classifier. [...] We also use CIFAR10 dataset for the experiments in Section 4.4 |
| Dataset Splits | Yes | Dataset: Since we want to have separate datasets for generating poisoned data and finetuning the binary model, we divide the ImageNet data to three sets for each category: 200 images for generating the poisoned data, 800 images for training the binary classifier, and 100 images for testing the binary classifier. [...] For each category, we have 1,500 images to train the poisoned data, 1,500 images for finetuning, and 1,000 images for evaluation. These three sets are disjoint. |
| Hardware Specification | Yes | It takes about 5 minutes to generate 100 poisoned images on a single NVIDIA Titan X GPU. |
| Software Dependencies | No | The paper mentions using AlexNet and the PGD attack, but does not provide specific version numbers for any software libraries, frameworks, or programming languages (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | For our ImageNet experiments we set a reference parameter set where the perturbation ϵ = 16, trigger size is 30x30 (while images are 224x224), and we randomly choose a location to paste the trigger on the source image. We generate 100 poisoned examples and add to our target class training set of size 800 images during finetuning. Thus about 12.5% of the target data is poisoned. To generate our poisoned images, we run Algorithm 1 with mini-batch gradient descent for 5,000 iterations with a batch size of K = 100. We use an initial learning rate of 0.01 with a decay schedule parameter of 0.95 every 2,000 iterations. |
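The "Pseudocode" and "Experiment Setup" rows above give Algorithm 1's reference setting: ϵ = 16, a 30x30 trigger pasted at a random location on 224x224 source images, 100 poisoned images, 5,000 mini-batch gradient descent iterations with batch size K = 100, and a learning rate of 0.01 decayed by 0.95 every 2,000 iterations. The following is a minimal PyTorch sketch of a feature-collision poisoning loop in that spirit, not the authors' released code: it assumes torchvision ≥ 0.13, images scaled to [0, 1], an fc7 AlexNet feature extractor, and a simple index-wise pairing of poisons with patched source images (the paper re-pairs them during optimization); all function and variable names here are ours.

```python
import torch
import torchvision.models as models

# Reference hyperparameters quoted in the "Experiment Setup" row above.
EPS = 16 / 255.0            # L_inf budget (epsilon = 16, assuming [0, 1] images)
TRIGGER_SIZE = 30           # 30x30 trigger on 224x224 images
ITERS = 5000                # mini-batch gradient descent iterations
LR0, DECAY, DECAY_EVERY = 0.01, 0.95, 2000

# Assumption: features are taken from AlexNet's penultimate (fc7) layer.
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
feat = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],
)
for p in feat.parameters():
    p.requires_grad_(False)

def paste_trigger(images, trigger):
    """Paste the trigger patch at a random location on each image."""
    out = images.clone()
    for i in range(out.size(0)):
        y = torch.randint(0, 224 - TRIGGER_SIZE + 1, (1,)).item()
        x = torch.randint(0, 224 - TRIGGER_SIZE + 1, (1,)).item()
        out[i, :, y:y + TRIGGER_SIZE, x:x + TRIGGER_SIZE] = trigger
    return out

def generate_poison(target_imgs, source_imgs, trigger):
    """Optimize poisons that stay within EPS of target-class images in pixel
    space while moving toward patched-source images in feature space."""
    z = target_imgs.clone()        # one poison per target image; K = 100 in the paper
    lr = LR0
    for it in range(ITERS):
        patched = paste_trigger(source_imgs, trigger)
        with torch.no_grad():
            f_patched = feat(patched)
        z.requires_grad_(True)
        loss = ((feat(z) - f_patched) ** 2).sum()
        (grad,) = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z = z - lr * grad
            # Project back into the epsilon-ball around the target images.
            z = torch.clamp(z, target_imgs - EPS, target_imgs + EPS).clamp(0, 1)
        if (it + 1) % DECAY_EVERY == 0:
            lr *= DECAY
    return z.detach()
```

At test time, the same `paste_trigger` call can be applied to unseen source images at random locations, which is the evaluation described in the "Research Type" row.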
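The "Dataset Splits" row quotes a per-category ImageNet split of 200 / 800 / 100 images for poison generation, binary-classifier finetuning, and testing. A small helper along these lines (the directory layout, file extension, and fixed seed are our assumptions, not the paper's) produces that kind of split:

```python
import random
from pathlib import Path

# Per-category split sizes quoted in the "Dataset Splits" row above.
N_POISON_GEN, N_FINETUNE, N_TEST = 200, 800, 100

def split_category(category_dir: str, seed: int = 0):
    """Split one ImageNet category folder into three disjoint sets:
    poison generation, binary-classifier finetuning, and testing.
    The .JPEG glob and fixed seed are assumptions, not the paper's."""
    files = sorted(Path(category_dir).glob("*.JPEG"))
    needed = N_POISON_GEN + N_FINETUNE + N_TEST
    assert len(files) >= needed, f"need at least {needed} images, found {len(files)}"
    random.Random(seed).shuffle(files)
    poison_gen = files[:N_POISON_GEN]
    finetune = files[N_POISON_GEN:N_POISON_GEN + N_FINETUNE]
    test = files[N_POISON_GEN + N_FINETUNE:needed]
    return poison_gen, finetune, test
```

Because the three slices cover non-overlapping ranges of the shuffled file list, they are disjoint by construction, matching the paper's statement that the three sets are disjoint.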