A Closer Look at Curriculum Adversarial Training: From an Online Perspective
Authors: Lianghe Shi, Weiwei Liu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments to support our theory. |
| Researcher Affiliation | Academia | School of Computer Science, Wuhan University; Institute of Artificial Intelligence, Wuhan University; National Engineering Research Center for Multimedia Software, Wuhan University; Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is attached to the supplementary material. |
| Open Datasets | Yes | Extensive numerical experiments on CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009) datasets verify our theoretical bounds and the explanation we provide regarding curriculum adversarial training. |
| Dataset Splits | No | The paper reports experiments on the standard CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009) datasets but does not specify explicit train/validation/test split details. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper describes the optimization methods used (projected gradient descent as the adversary, stochastic gradient descent for training) but does not name specific software libraries, frameworks, or version numbers. |
| Experiment Setup | Yes | For all baselines, we run projected gradient descent (PGD) as our adversary, with a step size of 0.007. The maximum step of PGD is 20, and the maximum radius of the ℓ∞-norm bounded perturbation is δ = 0.031. Following Zhang et al. (2020), the models are trained using stochastic gradient descent (SGD) with momentum of 0.9 for 120 epochs. The initial learning rate is 0.1, reduced to 0.01, 0.001, and 0.0005 at epochs 60, 90, and 110, respectively. The batch size is 128. |
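
The experiment-setup row above is concrete enough to express as code. The following is a minimal PyTorch-style sketch, not the authors' implementation: the model, data loader, random PGD start, and cross-entropy training loss are assumptions, while the quoted hyperparameters (step size 0.007, 20 PGD steps, δ = 0.031, SGD with momentum 0.9, batch size 128, 120 epochs, and the stated learning-rate schedule) come from the paper.

```python
# Hedged sketch of the quoted adversarial-training setup (not the authors' code).
import torch
import torch.nn.functional as F

EPS = 0.031            # l_inf perturbation radius (delta = 0.031, roughly 8/255)
PGD_STEP_SIZE = 0.007  # PGD step size from the paper
PGD_STEPS = 20         # maximum number of PGD steps


def pgd_attack(model, x, y, eps=EPS, step_size=PGD_STEP_SIZE, steps=PGD_STEPS):
    """Projected gradient descent inside an l_inf ball around x (random start assumed)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back to the l_inf ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()


def learning_rate(epoch):
    """Quoted schedule: 0.1 -> 0.01 -> 0.001 -> 0.0005 at epochs 60, 90, 110."""
    if epoch < 60:
        return 0.1
    if epoch < 90:
        return 0.01
    if epoch < 110:
        return 0.001
    return 0.0005


def train(model, loader, epochs=120, momentum=0.9):
    """Adversarial training loop with SGD; `loader` is assumed to yield batches of 128."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=momentum)
    for epoch in range(epochs):
        for group in optimizer.param_groups:
            group["lr"] = learning_rate(epoch)
        for x, y in loader:
            x_adv = pgd_attack(model, x, y)
            loss = F.cross_entropy(model(x_adv), y)  # training loss is an assumption
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

The schedule is written as an explicit piecewise function rather than a single-gamma `MultiStepLR` because the quoted decay factors differ across milestones (×0.1, ×0.1, then ×0.5).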