Training on Foveated Images Improves Robustness to Adversarial Attacks
Authors: Muhammad Shah, Aqsa Kashaf, Bhiksha Raj
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that compared to DNNs trained on the original images, DNNs trained on images transformed by R-Blur are substantially more robust to adversarial attacks, as well as other, non-adversarial, corruptions, achieving up to 25% higher accuracy on perturbed data. |
| Researcher Affiliation | Collaboration | Muhammad A. Shah, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213, mshah1@cmu.edu; Aqsa Kashaf, ByteDance, San Jose, CA 95110, akashaf@cmu.edu; Bhiksha Raj, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213, bhiksha@cs.cmu.edu |
| Pseudocode | No | The paper describes the operations of R-Blur verbally and through figures (e.g., Figure 1) and mathematical equations, but does not provide a formal pseudocode or algorithm block. |
| Open Source Code | Yes | The code for R-Blur is available at https://github.com/ahmedshah1494/RBlur |
| Open Datasets | Yes | Datasets: We use natural image datasets, namely CIFAR-10 [35], Imagenet ILSVRC 2012 [37], Ecoset [36] and a 10-class subset of Ecoset (Ecoset-10). |
| Dataset Splits | Yes | The training/validation/test splits of Ecoset-10 and Ecoset are 48K/859/1K, and 1.4M/28K/28K respectively. |
| Hardware Specification | Yes | We trained our models on compute clusters with Nvidia GeForce 2080 Ti and V100 GPUs. Most of the Imagenet and Ecoset models were trained and evaluated on the V100s, while the CIFAR-10 and Ecoset-10 models were trained and evaluated on the 2080 Tis. |
| Software Dependencies | Yes | We used PyTorch v1.11 and Python 3.9.12 for our implementation. |
| Experiment Setup | Yes | During training, we use random horizontal flipping and padding + random cropping, as well as AutoAugment [41] for CIFAR-10 and RandAugment for Ecoset and Imagenet. All Ecoset and Imagenet images were resized and cropped to 224×224. For CIFAR-10 we use a Wide-ResNet [42] model with 22 convolutional layers and a widening factor of 4, and for Ecoset and Imagenet we use XResNet-18 from fastai [43] with a widening factor of 2. Table 3 presents the configurations used to train the models used in our evaluation. For all the models the SGD optimizer was used with Nesterov momentum=0.9. |
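Since the paper provides no pseudocode for R-Blur (see the Pseudocode row above), the following is a minimal, hypothetical sketch of the general idea of foveated blurring: pixels far from a fixation point are blurred more strongly, mimicking the falloff of visual acuity away from the fovea. This is not the authors' R-Blur operator, which additionally models color desaturation and eccentricity-dependent sampling; the function name, parameters, and blur schedule here are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveated_blur(image, fixation, max_sigma=3.0, n_levels=4):
    """Apply distance-dependent Gaussian blur around a fixation point.

    Simplified illustration of foveation, NOT the paper's R-Blur:
    each pixel is assigned a blur level proportional to its
    normalized distance (eccentricity) from the fixation point.
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = fixation
    # Normalized eccentricity: 0 at fixation, 1 at the farthest pixel.
    dist = np.sqrt((ys - fy) ** 2 + (xs - fx) ** 2)
    ecc = dist / dist.max()
    # Precompute a small stack of progressively blurred copies.
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    blurred = [image if s == 0 else gaussian_filter(image, sigma=s)
               for s in sigmas]
    # Select the blur level per pixel according to eccentricity.
    level = np.minimum((ecc * n_levels).astype(int), n_levels - 1)
    out = np.zeros_like(image, dtype=float)
    for i in range(n_levels):
        out[level == i] = blurred[i][level == i]
    return out
```

A per-pixel blur stack like this is a common, cheap approximation; the actual R-Blur implementation is available in the linked repository.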
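The Experiment Setup row states that all models were trained with SGD using Nesterov momentum=0.9. As a hedged illustration of what that optimizer setting computes, here is a single Nesterov-momentum SGD step in plain NumPy (the function name and learning rate are illustrative; the paper's actual training uses PyTorch's built-in optimizer):

```python
import numpy as np

def sgd_nesterov_step(w, grad_fn, velocity, lr=0.1, momentum=0.9):
    """One SGD step with Nesterov momentum.

    Unlike classical momentum, Nesterov momentum evaluates the
    gradient at the look-ahead point w + momentum * velocity,
    giving the update a corrective "peek" in the direction the
    momentum is already carrying the parameters.
    """
    lookahead = w + momentum * velocity
    g = grad_fn(lookahead)
    velocity = momentum * velocity - lr * g
    return w + velocity, velocity

# Illustrative usage: minimize f(w) = w^2, whose gradient is 2w.
w, v = np.array([5.0]), np.zeros(1)
for _ in range(100):
    w, v = sgd_nesterov_step(w, lambda x: 2 * x, v)
```

In PyTorch this corresponds to `torch.optim.SGD(params, lr=..., momentum=0.9, nesterov=True)`.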