Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100
Authors: Sahil Singla, Surbhi Singla, Soheil Feizi
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On CIFAR-10, we achieve significant improvements over prior works in provable robust accuracy (5.81%) with only a minor drop in standard accuracy (0.29%). Code for reproducing all experiments in the paper is available at https://github.com/singlasahil14/SOC. We perform experiments under the setting of provably robust image classification on CIFAR-10 and CIFAR-100 datasets... |
| Researcher Affiliation | Academia | Sahil Singla¹, Surbhi Singla², Soheil Feizi¹; ¹University of Maryland, College Park ({ssingla,sfeizi}@umd.edu); ²surbhisingla1995@gmail.com |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks found. |
| Open Source Code | Yes | Code for reproducing all experiments in the paper is available at https://github.com/singlasahil14/SOC. |
| Open Datasets | Yes | We perform experiments under the setting of provably robust image classification on CIFAR-10 and CIFAR-100 datasets |
| Dataset Splits | Yes | Using 5000 held-out samples from CIFAR-10, we tested 6 different values of γ shown in Table 3 and selected γ = 0.5 because it resulted in less than a 0.5% decrease in standard accuracy while yielding a 4.96% increase in provably robust accuracy. |
| Hardware Specification | Yes | All experiments were performed using 1 NVIDIA GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | No specific software dependencies with version numbers are explicitly listed in the paper. |
| Experiment Setup | Yes | All networks were trained for 200 epochs with initial learning rate of 0.1, dropped by a factor of 0.1 after 100 and 150 epochs. For Certificate Regularization (or CR), we set the parameter γ = 0.5. |
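The training schedule quoted above (200 epochs, initial learning rate 0.1, multiplied by 0.1 after epochs 100 and 150) can be sketched as a small step-decay function. This is a minimal illustration of the reported schedule, not the authors' actual training code; the function name and parameter names are our own.

```python
def step_lr(epoch, base_lr=0.1, drop_epochs=(100, 150), factor=0.1):
    """Step learning-rate schedule matching the paper's reported setup:
    start at `base_lr` and multiply by `factor` once each threshold in
    `drop_epochs` has been reached.
    """
    lr = base_lr
    for drop in drop_epochs:
        if epoch >= drop:
            lr *= factor
    return lr

# Schedule over the 200-epoch run described in the paper:
# epochs 0-99   -> 0.1
# epochs 100-149 -> 0.01
# epochs 150-199 -> 0.001
for epoch in (0, 100, 150):
    print(epoch, step_lr(epoch))
```

In a PyTorch training loop, the same behavior would typically come from `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[100, 150]` and `gamma=0.1` (this `gamma` is the scheduler's decay factor, distinct from the Certificate Regularization parameter γ = 0.5 in the paper).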