Robustra: Training Provable Robust Neural Networks over Reference Adversarial Space
Authors: Linyi Li, Zexuan Zhong, Bo Li, Tao Xie
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The evaluation results show that our approach can provide significantly better provable adversarial error bounds on MNIST and CIFAR10 datasets, compared to the state-of-the-art results. |
| Researcher Affiliation | Academia | Linyi Li, Zexuan Zhong, Bo Li and Tao Xie, University of Illinois at Urbana-Champaign, {linyi2, zexuan2, lbo, taoxie}@illinois.edu |
| Pseudocode | No | The paper describes algorithms and formulations but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Our code and model weights are available at https://github.com/llylly/Robustra. |
| Open Datasets | Yes | We evaluate Robustra on image classification tasks with two datasets: MNIST [LeCun et al., 1998] and CIFAR10 [Krizhevsky and Hinton, 2009]. |
| Dataset Splits | No | The paper mentions 'training set' and 'test set' but does not explicitly describe a validation split or its size/percentage. |
| Hardware Specification | Yes | All experiments are run on Geforce GTX 1080 Ti GPUs. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'SGD optimizer' but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | For each model and dataset, we run 100 epochs on the training set. The Adam optimizer is used for MNIST models, and the SGD optimizer (0.9 momentum, 5×10⁻⁴ weight decay) is used for CIFAR10 models. The ℓ∞-norm radius ϵ is initialized to 0.01, then linearly increases to the configured ϵ (0.1 or 0.3 for MNIST, 2/255 or 8/255 for CIFAR10) over the first 20 epochs. In the first 20 epochs, the learning rate is set to 0.001; after that, it decays by half every 10 epochs. The batch size is set to 50. |
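The quoted setup fully determines the ϵ warm-up and learning-rate schedules, which can be sketched as two small helper functions. This is a hypothetical illustration of the schedules as described, not the authors' released code; the function names are made up, and we assume the halving starts at epoch 20 (the paper's wording is compatible with this reading).

```python
def eps_at_epoch(epoch, eps_final, eps_init=0.01, warmup=20):
    """Linearly ramp the l-inf radius from eps_init to eps_final
    over the first `warmup` epochs, then hold it constant."""
    if epoch >= warmup:
        return eps_final
    return eps_init + (eps_final - eps_init) * epoch / warmup

def lr_at_epoch(epoch, base_lr=1e-3, warmup=20, halve_every=10):
    """Hold base_lr for the first `warmup` epochs, then halve it
    every `halve_every` epochs (assumed to start at epoch 20)."""
    if epoch < warmup:
        return base_lr
    return base_lr * 0.5 ** ((epoch - warmup) // halve_every + 1)
```

For example, with the CIFAR10 setting ϵ = 8/255, `eps_at_epoch(10, 8/255)` returns the midpoint of the ramp, and `lr_at_epoch(25)` returns 0.0005 (one halving applied).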