Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class
Authors: Khoa D. Doan, Yingjie Lao, Ping Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show empirically that the proposed framework achieves high attack performance (e.g., 100% attack success rates in several experiments) while preserving the clean-data performance in several benchmark datasets, including MNIST, CIFAR10, GTSRB, and TinyImageNet. |
| Researcher Affiliation | Collaboration | Khoa D. Doan (College of Engineering and Computer Science, VinUniversity), Yingjie Lao (Electrical and Computer Engineering, Clemson University, USA), Ping Li (LinkedIn Ads, 700 Bellevue Way NE, Bellevue, WA 98004, USA) |
| Pseudocode | Yes | Section 4.4 "Marksman's Optimization"; Algorithm 1: Marksman Optimization Algorithm (a hedged sketch of this alternating loop appears below the table) |
| Open Source Code | Yes | 3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the supplementary material. (...) 4. (c) Did you include any new assets either in the supplemental material or as a URL? [Yes] Codes are submitted as supplementary material. |
| Open Datasets | Yes | We demonstrate the effectiveness of the proposed Marksman backdoor through comprehensive experiments on several widely-used datasets for backdoor attack study: MNIST, CIFAR10, GTSRB, and TinyImageNet (T-IMNET). |
| Dataset Splits | Yes | 3. (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See the supplementary material. |
| Hardware Specification | Yes | 3. (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See the supplementary material. |
| Software Dependencies | No | The paper mentions using a 'momentum SGD optimizer' but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions). |
| Experiment Setup | Yes | Hyperparameters: For each method, we train the classifiers using the momentum SGD optimizer (initial learning rate of 0.01, and after every 100 epochs, the learning rate is decayed by a factor of 0.1). For Marksman, we train f and T alternately but update T slowly after every epoch. To achieve a high degree of visual stealthiness of Marksman, we select ϵ as small as 0.05 for all datasets. We perform grid searches to select the best α and β using the CIFAR10 dataset and use these values for all the experiments. (The optimizer and its decay schedule are sketched in code below.) |
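For orientation, here is a minimal PyTorch sketch of the alternating loop that Algorithm 1 and the Experiment Setup row describe: the classifier f is updated on every batch using both clean and poisoned inputs, while the class-conditional trigger generator T is updated "slowly," once per epoch. The tiny architectures, the synthetic batches, the momentum value, and the loss weight `alpha` are illustrative assumptions, not the authors' released code; only the alternating schedule, the arbitrary target-class sampling, and ϵ = 0.05 are taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-ins (NOT the authors' architectures): a tiny classifier f
# and a class-conditional trigger generator T for 32x32 RGB inputs, 10 classes.
num_classes, eps, alpha = 10, 0.05, 1.0   # eps = 0.05 per the paper; alpha is a placeholder

f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))
T = nn.Sequential(nn.Embedding(num_classes, 64), nn.Linear(64, 3 * 32 * 32))

opt_f = torch.optim.SGD(f.parameters(), lr=0.01, momentum=0.9)  # momentum value assumed
opt_T = torch.optim.SGD(T.parameters(), lr=0.01, momentum=0.9)

def make_poison(x, y_target):
    # Trigger bounded in L-infinity norm by eps, conditioned on the target class.
    t = eps * torch.tanh(T(y_target)).view(-1, 3, 32, 32)
    return torch.clamp(x + t, 0.0, 1.0)

for epoch in range(2):                        # toy loop; the paper trains far longer
    for _ in range(10):                       # stand-in for the real data loader
        x = torch.rand(8, 3, 32, 32)          # synthetic batch for illustration
        y = torch.randint(0, num_classes, (8,))
        y_target = torch.randint(0, num_classes, (8,))   # arbitrary target class

        # Classifier update: clean loss plus attack loss on poisoned inputs.
        loss_f = F.cross_entropy(f(x), y) + \
                 alpha * F.cross_entropy(f(make_poison(x, y_target)), y_target)
        opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # Per the paper, T is updated slowly, once after every epoch.
    x = torch.rand(8, 3, 32, 32)
    y_target = torch.randint(0, num_classes, (8,))
    loss_T = F.cross_entropy(f(make_poison(x, y_target)), y_target)
    opt_T.zero_grad(); loss_T.backward(); opt_T.step()
```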
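The learning-rate schedule quoted in the Experiment Setup row maps directly onto PyTorch's `StepLR`. A sketch, again assuming a standard momentum value of 0.9 (the paper does not state one):

```python
import torch

model = torch.nn.Linear(10, 2)   # placeholder model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# lr starts at 0.01 and is multiplied by 0.1 after every 100 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)

for epoch in range(300):
    # ... one epoch of training ...
    scheduler.step()  # lr: 0.01 (epochs 0-99), 0.001 (100-199), 0.0001 (200-299)
```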