Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness
Authors: Yuancheng Xu, Yanchao Sun, Micah Goldblum, Tom Goldstein, Furong Huang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on the CIFAR-10 and Tiny-ImageNet datasets verify that DyART alleviates the conflicting dynamics of the decision boundary and obtains improved robustness under various perturbation sizes compared to the state-of-the-art defenses. |
| Researcher Affiliation | Academia | University of Maryland, College Park; New York University. {ycxu,ycs,tomg,furongh}@umd.edu; goldblum@nyu.edu |
| Pseudocode | No | The paper describes methods and processes in narrative text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/Yuancheng-Xu/Dynamics-Aware-Robust-Training. |
| Open Datasets | Yes | Experiments on the CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Deng et al., 2009) datasets |
| Dataset Splits | Yes | To alleviate robust overfitting (Rice et al., 2020), we perform early stopping on a validation set of size 1024 using projected gradient descent (PGD) attacks with 20 steps. (A hedged sketch of this early-stopping check appears after the table.) |
| Hardware Specification | Yes | All experiments are run on an NVIDIA GeForce RTX 2080 Ti GPU. ... We use one NVIDIA RTX A4000. |
| Software Dependencies | No | The paper mentions 'PyTorch' and the 'Opacus package (Yousefpour et al., 2021)' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | Models are trained using stochastic gradient descent with momentum 0.9 and weight decay 0.0005 with batch size 256 for 200 epochs on CIFAR-10 and 100 epochs on Tiny-ImageNet. We use a cosine learning rate schedule (Loshchilov & Hutter, 2016) without restarts where the initial learning rate is set to 0.1 for all baselines and DyART. To alleviate robust overfitting (Rice et al., 2020), we perform early stopping on a validation set of size 1024 using projected gradient descent (PGD) attacks with 20 steps. (The training configuration is illustrated in the sketch after the table.) |
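
To make the quoted training configuration concrete, the sketch below assembles the reported hyperparameters (SGD with momentum 0.9, weight decay 0.0005, batch size 256, initial learning rate 0.1, cosine schedule without restarts, 200 epochs on CIFAR-10) into a minimal PyTorch loop. The ResNet-18 backbone, data augmentation, and plain cross-entropy loss are illustrative assumptions; the paper's actual DyART objective is not reproduced here.

```python
# Minimal sketch of the reported training configuration; assumptions noted inline.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader
from torchvision.models import resnet18

# CIFAR-10 with standard augmentation (the augmentation choices are assumptions).
transform = T.Compose([T.RandomCrop(32, padding=4), T.RandomHorizontalFlip(), T.ToTensor()])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=256, shuffle=True)  # batch size 256, as reported

model = resnet18(num_classes=10)  # hypothetical backbone, not confirmed by the quoted text
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0005)

epochs = 200  # 200 epochs on CIFAR-10; 100 on Tiny-ImageNet
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)  # cosine, no restarts

criterion = nn.CrossEntropyLoss()  # placeholder; DyART trains with its own dynamics-aware loss
for epoch in range(epochs):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine decay stepped once per epoch (assumption)
```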
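
The early-stopping criterion quoted under Dataset Splits can likewise be sketched: evaluate robust accuracy on the 1024-example validation set under a 20-step PGD attack and keep the checkpoint with the best result. The l-infinity threat model with epsilon 8/255 and step size 2/255 are common CIFAR-10 defaults assumed here, not values confirmed by the quoted text.

```python
# Sketch of PGD-20 robust-accuracy evaluation for early stopping.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    """Standard l-infinity PGD with random start (eps and alpha are assumed defaults)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

@torch.no_grad()
def count_correct(model, x, y):
    return (model(x).argmax(dim=1) == y).sum().item()

def robust_accuracy(model, val_loader):
    """Robust accuracy on the held-out validation set (size 1024, as reported)."""
    model.eval()
    correct, total = 0, 0
    for x, y in val_loader:
        x_adv = pgd_attack(model, x, y)
        correct += count_correct(model, x_adv, y)
        total += y.size(0)
    return correct / total

# Early stopping: checkpoint the model whenever robust_accuracy improves.
```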