Defending Against Physically Realizable Attacks on Image Classification
Authors: Tong Wu, Liang Tong, Yevgeniy Vorobeychik
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our first contribution is an empirical evaluation of the effectiveness of conventional approaches to robust ML against two physically realizable attacks... Our second contribution is a novel abstract attack model... We then experimentally demonstrate that our proposed approach is significantly more robust against physical attacks on deep neural networks than adversarial training and randomized smoothing methods that leverage ℓp-based attack models. |
| Researcher Affiliation | Academia | Tong Wu, Liang Tong & Yevgeniy Vorobeychik Department of Computer Science and Engineering Washington University in St. Louis {tongwu, liangtong, yvorobeychik}@wustl.edu |
| Pseudocode | Yes | Algorithm 1 presents the full algorithm for identifying the ROA position, which amounts to exhaustive search through the image pixel region. ... The full algorithm is provided as Algorithm 2. (A minimal search sketch is given below the table.) |
| Open Source Code | Yes | 1The code can be found at https://github.com/tongwu2020/phattacks |
| Open Datasets | Yes | We applied white-box dodging (untargeted) attacks on the face recognition systems (FRS) from Sharif et al. (2016). We used both the VGGFace data and transferred VGGFace CNN model for the face recognition task... Following Eykholt et al. (2018), we use the LISA traffic sign dataset for our experiments... |
| Dataset Splits | Yes | We used the standard crop-and-resize method to process the data to be 224 × 224 pixels, and split the dataset into training, validation, and test according to a 7:2:1 ratio for each subject. In total, the data set has 3178 images in the training set, 922 images in the validation set, and 470 images in the test set. (A per-subject split sketch is given below the table.) |
| Hardware Specification | No | The paper mentions training models and using a CNN but does not specify any hardware details such as CPU/GPU models, memory, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions 'Pytorch implementation' and 'Pytorch built-in Adam Optimizer', but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We used the learning rate of ϵ/4 for the former and 1 for the latter. In all cases, pixels are in the [0, 255] range and retraining was performed for 30 epochs using the ADAM optimizer. ... We set the batch size to be 64 and use Pytorch built-in Adam Optimizer with an initial learning rate of 10⁻⁴ and default parameters in Pytorch. ... We used {30, 50} iterations of PGD with ϵ = 255/2 to generate adversarial noise inside the rectangle, and with learning rate α = {8, 4} correspondingly. ... DOA adversarial training is performed for 5 epochs with a learning rate of 0.0001. (A training-loop sketch is given below the table.) |
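
The Pseudocode row references Algorithm 1, an exhaustive search for the position of the Rectangular Occlusion Attack (ROA). Below is a minimal PyTorch sketch of that search; the function name, the stride, the grey fill value, and the NCHW / [0, 1] input assumptions are illustrative choices, not the authors' code.

```python
import torch

def find_roa_position(model, x, y, rect_h, rect_w, stride=5, fill=0.5):
    """Exhaustively try every rectangle position (at the given stride) and
    return the (row, col) whose grey occlusion maximizes the classification
    loss. Sketch of Algorithm 1; names and defaults are assumptions."""
    model.eval()
    loss_fn = torch.nn.CrossEntropyLoss()
    best_pos, best_loss = (0, 0), -float("inf")
    _, _, H, W = x.shape  # assumes an NCHW batch
    for i in range(0, H - rect_h + 1, stride):
        for j in range(0, W - rect_w + 1, stride):
            x_adv = x.clone()
            x_adv[:, :, i:i + rect_h, j:j + rect_w] = fill  # grey rectangle
            with torch.no_grad():
                loss = loss_fn(model(x_adv), y).item()
            if loss > best_loss:
                best_loss, best_pos = loss, (i, j)
    return best_pos
```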
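The Dataset Splits row describes a per-subject 7:2:1 train/validation/test split of the VGGFace data. A sketch of such a split is below, assuming the data arrives as (path, subject) pairs; the helper name and seed handling are hypothetical, not the authors' preprocessing script.

```python
import random
from collections import defaultdict

def split_per_subject(samples, seed=0):
    """Split (path, subject) pairs 7:2:1 into train/val/test, per subject.
    Sketch of the reported protocol; not the authors' code."""
    rng = random.Random(seed)
    by_subject = defaultdict(list)
    for path, subject in samples:
        by_subject[subject].append(path)
    train, val, test = [], [], []
    for subject, paths in by_subject.items():
        rng.shuffle(paths)
        n_tr, n_va = int(0.7 * len(paths)), int(0.2 * len(paths))
        train += [(p, subject) for p in paths[:n_tr]]
        val += [(p, subject) for p in paths[n_tr:n_tr + n_va]]
        test += [(p, subject) for p in paths[n_tr + n_va:]]
    return train, val, test
```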
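The Experiment Setup row reports PGD confined to the rectangle ({30, 50} iterations with step size α = {8, 4} on the 0-255 scale) followed by DOA adversarial retraining with Adam at learning rate 10⁻⁴ for 5 epochs and batch size 64. The sketch below combines those pieces, reusing `find_roa_position` from the first sketch; the [0, 1] pixel scaling (so α = 8/255 mirrors a step of 8 on the 0-255 scale) and all names are assumptions, not the authors' implementation.

```python
import torch

def pgd_in_rectangle(model, x, y, pos, rect_h, rect_w, iters=30, alpha=8 / 255):
    """PGD ascent on the loss, restricted to pixels inside the fixed
    rectangle. Sketch of the inner loop behind Algorithm 2; assumes
    inputs scaled to [0, 1]."""
    i, j = pos
    mask = torch.zeros_like(x)
    mask[:, :, i:i + rect_h, j:j + rect_w] = 1.0
    x_adv = x.clone().detach()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # ascend the loss, updating only pixels inside the rectangle
        x_adv = (x_adv + alpha * grad.sign() * mask).clamp(0, 1).detach()
    return x_adv

def doa_train(model, loader, rect_h, rect_w, epochs=5, lr=1e-4):
    """Retrain on ROA examples; mirrors the reported Adam / 10^-4 / 5-epoch
    setup (batch size 64 would come from the DataLoader). A sketch, not the
    authors' training code."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            pos = find_roa_position(model, x, y, rect_h, rect_w)  # first sketch
            x_adv = pgd_in_rectangle(model, x, y, pos, rect_h, rect_w)
            model.train()
            opt.zero_grad()
            loss_fn(model(x_adv), y).backward()
            opt.step()
    return model
```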