Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Training on Foveated Images Improves Robustness to Adversarial Attacks
Authors: Muhammad Shah, Aqsa Kashaf, Bhiksha Raj
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that compared to DNNs trained on the original images, DNNs trained on images transformed by R-Blur are substantially more robust to adversarial attacks, as well as other, non-adversarial, corruptions, achieving up to 25% higher accuracy on perturbed data. |
| Researcher Affiliation | Collaboration | Muhammad A. Shah, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213, EMAIL; Aqsa Kashaf, ByteDance, San Jose, CA 95110, EMAIL; Bhiksha Raj, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213, EMAIL |
| Pseudocode | No | The paper describes the operations of R-Blur verbally and through figures (e.g., Figure 1) and mathematical equations, but does not provide a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The code for R-Blur is available at https://github.com/ahmedshah1494/RBlur |
| Open Datasets | Yes | Datasets: We use natural image datasets, namely CIFAR-10 [35], Imagenet ILSVRC 2012 [37], Ecoset [36] and a 10-class subset of Ecoset (Ecoset-10). |
| Dataset Splits | Yes | The training/validation/test splits of Ecoset-10 and Ecoset are 48K/859/1K, and 1.4M/28K/28K respectively. |
| Hardware Specification | Yes | We trained our models on compute clusters with NVIDIA GeForce 2080 Ti and V100 GPUs. Most of the Imagenet and Ecoset models were trained and evaluated on the V100s, while the CIFAR-10 and Ecoset-10 models were trained and evaluated on the 2080 Tis. |
| Software Dependencies | Yes | We used PyTorch v1.11 and Python 3.9.12 for our implementation. |
| Experiment Setup | Yes | During training, we use random horizontal flipping and padding + random cropping, as well as AutoAugment [41] for CIFAR-10 and RandAugment for Ecoset and Imagenet. All Ecoset and Imagenet images were resized and cropped to 224×224. For CIFAR-10 we use a Wide-ResNet [42] model with 22 convolutional layers and a widening factor of 4, and for Ecoset and Imagenet we use XResNet-18 from fastai [43] with a widening factor of 2. Table 3 presents the configurations used to train the models used in our evaluation. For all the models the SGD optimizer was used with Nesterov momentum=0.9. |
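The baseline augmentations named in the setup row (random horizontal flip, padding + random crop) can be sketched as plain array operations. This is an illustrative NumPy sketch, not code from the authors' R-Blur repository; the function names and the pad width of 4 are assumptions chosen for a CIFAR-10-sized input.

```python
import numpy as np

def pad_random_crop(img, pad=4, rng=None):
    """Zero-pad an HxWxC image by `pad` pixels on each side,
    then crop back to the original size at a random offset.
    (Pad width 4 is an assumed value, not from the paper.)"""
    rng = rng or np.random.default_rng()
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    top = rng.integers(0, 2 * pad + 1)   # random vertical offset
    left = rng.integers(0, 2 * pad + 1)  # random horizontal offset
    return padded[top:top + h, left:left + w]

def random_horizontal_flip(img, p=0.5, rng=None):
    """Mirror the image left-right with probability p."""
    rng = rng or np.random.default_rng()
    return img[:, ::-1] if rng.random() < p else img

rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))  # a CIFAR-10-sized image
aug = random_horizontal_flip(pad_random_crop(x, rng=rng), rng=rng)
print(aug.shape)  # (32, 32, 3): spatial size is preserved
```

In practice the paper's pipeline would apply these per batch during training (e.g. via torchvision transforms in PyTorch v1.11), followed by the dataset-specific AutoAugment/RandAugment policies cited in the row above.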