CAT: Customized Adversarial Training for Improved Robustness
Authors: Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | through extensive experiments, we show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods. |
| Researcher Affiliation | Collaboration | 1Department of Computer Science and Engineering, HKUST 2Department of Electrical and Computer Engineering, Princeton University 3Department of Computer Science, UT Austin 4Department of Computer Science, UCLA 5IBM Research AI 6Amazon |
| Pseudocode | Yes | Algorithm 1 CAT algorithm |
| Open Source Code | Yes | Our code is publicly available at https://github.com/cmhcbb/CAT-Customized-Adversarial-Training-for-Improved-Robustness. |
| Open Datasets | Yes | We use CIFAR-10 dataset for performance evaluation. |
| Dataset Splits | No | The paper mentions using CIFAR-10 and standard models from other works, implying standard splits, but does not explicitly state the training, validation, and test dataset splits (e.g., percentages or counts) within the text. |
| Hardware Specification | Yes | All our experiments were implemented in Pytorch-1.4 and conducted using a GTX 2080 TI GPU. |
| Software Dependencies | Yes | All our experiments were implemented in Pytorch-1.4 |
| Experiment Setup | Yes | We set the number of iterations in adversarial attack to be 10 for all methods during training. Adversarial training and TRADES are trained on PGD attacks setting ϵ = 8/255 with cross entropy loss (CE). All the models are trained using SGD with momentum 0.9, weight decay 5×10⁻⁴. For VGG-16/WideResNet models, we use the initial learning rate of 0.01/0.1, and we decay the learning rate by 90% at the 80th, 140th, and 180th epoch. For CAT, we set epsilon scheduling parameter η = 0.005, ϵ_max = 8/255 and weighting parameter c = 10. We set β = 1 for the distribution Dirichlet(β), which is equal to a uniform distribution. Also, we set κ = 10. |
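
The experiment-setup row above amounts to a concrete training recipe. Below is a minimal PyTorch sketch of that baseline configuration (10-step L∞ PGD with ϵ = 8/255, SGD with momentum 0.9 and weight decay 5×10⁻⁴, learning rate decayed by 90% at epochs 80, 140, and 180). It is not the authors' released code (see the repository linked in the table); the ResNet-18 stand-in model, batch size, PGD step size of 2/255, and 200-epoch total are illustrative assumptions not stated in the table.

```python
# Illustrative sketch of the reported baseline adversarial-training setup,
# NOT the authors' released CAT code. Model choice (ResNet-18 stand-in),
# batch size, PGD step size, and total epochs are assumptions.
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """10-step L-inf PGD with cross-entropy loss, as used for the
    adversarial-training / TRADES baselines described in the table."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv


train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)  # stand-in for VGG-16/WideResNet

# SGD with momentum 0.9, weight decay 5e-4; initial lr 0.1 (WideResNet) or
# 0.01 (VGG-16), decayed by 90% at the 80th, 140th, and 180th epoch.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[80, 140, 180], gamma=0.1)

model.train()
for epoch in range(200):  # total epoch count is an assumption
    for x, y in train_loader:
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
    scheduler.step()
```

CAT itself customizes the perturbation budget rather than using the fixed ϵ above (epsilon scheduling parameter η = 0.005, ϵ_max = 8/255, weighting parameter c = 10, Dirichlet(β = 1) sampling, κ = 10); see Algorithm 1 in the paper and the linked repository for the exact procedure.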