MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Authors: Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. |
| Researcher Affiliation | Collaboration | Peking University, CMU, UCLA, Google |
| Pseudocode | Yes | Algorithm 1 MACER: robust training via MAximizing CErtified Radius |
| Open Source Code | Yes | Our code is available at https://github.com/RuntianZ/macer. |
| Open Datasets | Yes | In our experiments, we show that our method can be applied to modern deep neural networks on a wide range of datasets, including Cifar-10, ImageNet, MNIST, and SVHN. |
| Dataset Splits | No | The paper mentions a training set and a test set, but does not explicitly provide details for a validation set split (e.g., percentages or sample counts). |
| Hardware Specification | Yes | For Cifar-10 we use one NVIDIA P100 GPU and for ImageNet we use four NVIDIA P100 GPUs. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their specific versions). |
| Experiment Setup | Yes | For Cifar-10, MNIST and SVHN, we train the models for 440 epochs using our proposed algorithm. The learning rate is initialized to be 0.01, and is decayed by 0.1 at the 200th/400th epoch. For all the models, we use k = 16, γ = 8.0 and β = 16.0. |
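The hyperparameters quoted in the Experiment Setup row (k = 16, γ = 8.0, β = 16.0) parameterize the robustness term of Algorithm 1. Below is a minimal PyTorch-style sketch of that term, assuming a standard classifier `model`, an input batch `x` with integer labels `y`, and a noise level `sigma` (all names introduced here, not taken from the paper). It follows the paper's description of soft randomized smoothing with k Gaussian samples, inverse temperature β, and a hinge with margin γ on the certified radius; the exact clipping and indicator details of the paper's loss are simplified.

```python
import torch
import torch.nn.functional as F
from torch.distributions.normal import Normal

def macer_robustness_loss(model, x, y, sigma=0.25, k=16, gamma=8.0, beta=16.0):
    # Hedged sketch of MACER's robustness term: a hinge loss on the soft
    # certified radius of the smoothed classifier. `model`, `x`, `y`, and
    # `sigma` are assumptions of this sketch, not names from the paper.
    batch = x.shape[0]
    dims = x.shape[1:]
    # Draw k Gaussian-perturbed copies of each input (soft randomized smoothing).
    noise = torch.randn(batch, k, *dims, device=x.device) * sigma
    samples = (x.unsqueeze(1) + noise).reshape(batch * k, *dims)
    logits = model(samples).reshape(batch, k, -1)
    # Empirical smoothed class probabilities with inverse temperature beta.
    probs = F.softmax(beta * logits, dim=2).mean(dim=1)  # (batch, classes)
    # Soft certified radius R = (sigma / 2) * (Phi^-1(p1) - Phi^-1(p2)),
    # where p1, p2 are the top-2 smoothed probabilities.
    top2, idx = probs.topk(2, dim=1)
    std_normal = Normal(0.0, 1.0)
    radius = (sigma / 2) * (std_normal.icdf(top2[:, 0].clamp(1e-6, 1 - 1e-6))
                            - std_normal.icdf(top2[:, 1].clamp(1e-6, 1 - 1e-6)))
    # Only correctly classified points contribute; the hinge pushes their
    # certified radius toward the margin gamma.
    correct = idx[:, 0] == y
    hinge = F.relu(gamma - radius)[correct]
    return hinge.mean() if hinge.numel() > 0 else x.new_zeros(())
```

In Algorithm 1 this term is combined with a cross-entropy loss on the same Gaussian-perturbed samples, and the schedule quoted above (440 epochs, initial learning rate 0.01, decayed by 0.1 at the 200th and 400th epochs) is applied to the combined objective.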