Improved techniques for deterministic l2 robustness
Authors: Sahil Singla, Soheil Feizi
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using these methods, we significantly advance the state-of-the-art for standard and provable robust accuracies on CIFAR-10 (gains of +1.79% and +3.82%) and similarly on CIFAR-100 (+3.78% and +4.75%) across all networks. Code is available at https://github.com/singlasahil14/improved_l2_robustness. |
| Researcher Affiliation | Academia | Sahil Singla Department of Computer Science University of Maryland ssingla@umd.edu Soheil Feizi Department of Computer Science University of Maryland sfeizi@umd.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/singlasahil14/improved_l2_robustness. |
| Open Datasets | Yes | We perform experiments under the setting of provably robust image classification on CIFAR-10 and CIFAR-100 datasets. |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] |
| Hardware Specification | Yes | All experiments were performed using 1 NVIDIA GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper does not list software dependencies or their version numbers. |
| Experiment Setup | Yes | All networks were trained for 200 epochs with initial learning rate of 0.1, dropped by a factor of 0.1 after 100 and 150 epochs. For adversarial training with curvature regularization, we use ρ = 36/255 (0.1411), γ = 0.5 for CIFAR-10 and ρ = 0.2, γ = 0.75 for CIFAR-100. |
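The reported schedule (initial learning rate 0.1, dropped by a factor of 0.1 after epochs 100 and 150, for 200 epochs total) can be sketched as a plain step-decay function. This is a minimal illustration under the assumption of standard step decay; the paper states only the values above, not the training framework.

```python
def learning_rate(epoch, base_lr=0.1, drop_epochs=(100, 150), factor=0.1):
    """Step-decay schedule: start at base_lr, multiply by `factor`
    at each epoch listed in `drop_epochs` (values from the paper;
    function name and signature are illustrative)."""
    lr = base_lr
    for e in drop_epochs:
        if epoch >= e:
            lr *= factor
    return lr

# Epochs 0-99 use 0.1, epochs 100-149 use 0.01, epochs 150-199 use 0.001.
```

In PyTorch this corresponds to `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[100, 150]` and `gamma=0.1`, though the paper does not specify the framework used.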