Provably robust classification of adversarial examples with detection

Authors: Fatemeh Sheikholeslami, Ali Lotfi, J. Zico Kolter

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Specifically, tests on MNIST and CIFAR-10 datasets exhibit promising results, for example with provable robust error less than 63.63% and 67.92%, for 55.6% and 66.37% natural error, for ϵ = 8/255 and 16/255 on the CIFAR-10 dataset, respectively." and "Empirical performance of the proposed robust classification with detection on MNIST-10 and CIFAR-10 datasets is reported in this section, and is compared with the state-of-the-art alternatives."
Researcher Affiliation | Collaboration | Fatemeh Sheikholeslami, Bosch Center for Artificial Intelligence, Pittsburgh, PA (fatemeh.sheikholeslami@us.bosch.com); Ali Lotfi Rezaabad, The University of Texas at Austin, Austin, TX (alotfi@utexas.edu); J. Zico Kolter, Bosch Center for Artificial Intelligence and Carnegie Mellon University, Pittsburgh, PA (zkolter@cs.cmu.edu)
Pseudocode | Yes | "Algorithm 1: Solution for J_i(x, y) in Theorem 2"
Open Source Code | Yes | "Code is available at https://github.com/boschresearch/robust_classification_with_detection"
Open Datasets | Yes | "Empirical performance of the proposed robust classification with detection on MNIST-10 and CIFAR-10 datasets is reported in this section, and is compared with the state-of-the-art alternatives."
Dataset Splits | No | The paper describes training procedures (epochs, batch size, warm-up periods) but does not explicitly provide validation-set splits (e.g., percentages or counts) or refer to standard validation splits with citations. It refers to training data for normalization but not to explicit validation splits. For example: "For MNIST, the network is trained in 100 epochs with batch size of 100 (total of 60K steps). A warm-up period of 3 epochs (2K steps) is used (normal classification training with no robust loss), followed by a ramp-up duration of 18 epochs (10K steps), and the learning rate is decayed by a factor of 10 at epochs 25 and 42."
Hardware Specification | Yes | "networks are trained with a single NVIDIA Tesla V100S GPU."
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify any software libraries or frameworks with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x).
Experiment Setup | Yes | "For MNIST, the network is trained in 100 epochs with batch size of 100 (total of 60K steps). A warm-up period of 3 epochs (2K steps) is used (normal classification training with no robust loss), followed by a ramp-up duration of 18 epochs (10K steps), and the learning rate is decayed by a factor of 10 at epochs 25 and 42. ... For CIFAR-10, the network is trained in 3200 epochs with batch size of 1600 (total of 100K steps). A warm-up period of 320 epochs (10K steps) ..." and "Adam optimizer with learning rate of 5 × 10⁻⁴ is used." and "κ is scheduled by a linear ramp-down process, starting at 1, which after a warm-up period is ramped down to the value κ_end = 0.5." and "ϵ during training is also simultaneously scheduled by a linear ramp-up, starting at 0, and ramped up to the final value of ϵ_train."
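The schedules quoted above can be summarized in a short sketch. This is a minimal illustration assuming step-wise learning-rate decay and linear ramps measured in training steps, using the MNIST settings (2K warm-up steps, 10K ramp-up steps); the function names, the interpolation details, and the ϵ_train placeholder value are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the reported training schedules (MNIST settings assumed).
# Names, defaults, and the eps_train placeholder are illustrative only.

def lr_schedule(epoch, base_lr=5e-4, decay_epochs=(25, 42)):
    """Adam learning rate, decayed by a factor of 10 at the listed epochs."""
    lr = base_lr
    for e in decay_epochs:
        if epoch >= e:
            lr /= 10.0
    return lr

def kappa_schedule(step, warmup_steps=2_000, rampup_steps=10_000,
                   kappa_start=1.0, kappa_end=0.5):
    """Mixing weight kappa: held at 1 during warm-up, then linearly ramped down."""
    if step < warmup_steps:
        return kappa_start
    t = min(1.0, (step - warmup_steps) / rampup_steps)
    return kappa_start + t * (kappa_end - kappa_start)

def eps_schedule(step, eps_train, warmup_steps=2_000, rampup_steps=10_000):
    """Perturbation radius epsilon: 0 during warm-up, then linearly ramped up."""
    if step < warmup_steps:
        return 0.0
    t = min(1.0, (step - warmup_steps) / rampup_steps)
    return t * eps_train

# Example: values midway through the ramp-up (step 7,000, roughly epoch 11);
# eps_train=0.3 is a placeholder, not a value taken from the paper.
print(lr_schedule(11), kappa_schedule(7_000), eps_schedule(7_000, eps_train=0.3))
```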