Shift Invariance Can Reduce Adversarial Robustness

Authors: Vasu Singla, Songwei Ge, Ronen Basri, David Jacobs

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show empirically that with real datasets and realistic architectures, shift invariance reduces adversarial robustness. Finally, we describe initial experiments using synthetic data to probe the source of this connection. In Section 5 we show experimentally that it can.
Researcher Affiliation | Academia | Vasu Singla, Songwei Ge, University of Maryland, {vsingla, songweig}@cs.umd.edu; Ronen Basri, Weizmann Institute of Science, ronen.basri@weizmann.ac.il; David Jacobs, University of Maryland, dwj@cs.umd.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code can be found at https://github.com/SongweiGe/shift-invariance-adv-robustness.
Open Datasets | Yes | We consider two datasets in which FC networks are able to attain reasonable performance compared to CNNs, MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017). We compare ResNets with previously introduced FC networks on SVHN (Netzer et al., 2011). Finally we examine additional real architectures on CIFAR-10 and ImageNet.
Dataset Splits | No | The paper does not explicitly specify validation dataset splits or mention the use of a validation set with specific proportions or methods.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions software components such as the ADAM optimizer, SGD, and the PyTorch model zoo, but does not specify version numbers.
Experiment Setup | Yes | All models were trained for 20 epochs using the ADAM optimizer, with a batch size of 200 and learning rate of 0.01. The learning rate is decreased by a factor of 10 at the 10th and 15th epoch. The models were trained for 100 epochs using SGD with 0.9 momentum and batch-size of 128. A learning rate of 0.1, with decay of factor 10 at the 50th and 75th epochs was used.
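
The quoted setup maps onto a standard PyTorch training configuration. The sketch below is a minimal illustration of the two reported schedules, not the authors' released code: `model` and `train_loader` are placeholders, and the pairing of the Adam schedule with the small FC/CNN experiments and the SGD schedule with the ResNet experiments is an assumption inferred from the excerpt, not something the table states.

```python
# Minimal sketch of the two training schedules quoted above (assumed pairing with
# architecture families; not the authors' code). The batch sizes reported in the
# paper (200 for Adam, 128 for SGD) would be set on the DataLoader, not here.
import torch
import torch.nn as nn


def train(model, train_loader, setup="adam", device="cuda"):
    criterion = nn.CrossEntropyLoss()

    if setup == "adam":
        # 20 epochs, Adam, lr 0.01, decayed by 10x at epochs 10 and 15
        epochs = 20
        optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
        scheduler = torch.optim.lr_scheduler.MultiStepLR(
            optimizer, milestones=[10, 15], gamma=0.1)
    else:
        # 100 epochs, SGD with 0.9 momentum, lr 0.1, decayed by 10x at epochs 50 and 75
        epochs = 100
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
        scheduler = torch.optim.lr_scheduler.MultiStepLR(
            optimizer, milestones=[50, 75], gamma=0.1)

    model.to(device)
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```

A MultiStepLR scheduler is used here because it reproduces the stated "decrease by a factor of 10 at fixed epochs" behavior directly; the paper does not name the scheduler it used.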