Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network

Authors: Xuanqing Liu, Yao Li, Chongruo Wu, Cho-Jui Hsieh

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiment results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks. On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement compared with adversarial training (Madry et al., 2017) and random self-ensemble (Liu et al., 2017) under PGD attack with 0.035 distortion, and the gap becomes even larger on a subset of ImageNet. In this section, we test the performance of our robust Bayesian neural networks (Adv-BNN) with strong baselines on a wide variety of datasets. (A minimal PGD sketch follows the table.)
Researcher Affiliation | Academia | Xuanqing Liu (1), Yao Li (2), Chongruo Wu (3) & Cho-Jui Hsieh (1). (1) Department of Computer Science, UCLA, Los Angeles, CA 90095, USA; {xqliu,chohsieh}@cs.ucla.edu. (2) Department of Statistics, UC Davis. (3) Department of Computer Science, UC Davis; Davis, CA 95616, USA; {crwu,yaoli}@ucdavis.edu.
Pseudocode | Yes | The overall training algorithm is shown in Alg. 1. ... Algorithm 1: Code snippet for training Adv-BNN ... Based on RandLayer, we can further implement the variational Linear layer below in Alg. 2. ... Algorithm 2: Code snippet for implementing the variational Linear layer. (A hedged sketch of such a layer follows the table.)
Open Source Code | Yes | Code for reproduction has been made available online at https://github.com/xuanqing94/BayesianDefense
Open Datasets | Yes | On CIFAR-10 with VGG network, our model leads to 14% accuracy improvement... We test the proposed Adv-BNN approach on CIFAR-10, STL-10 and ImageNet-143 datasets... 1) STL-10 (Coates et al., 2011), which has 5,000 training images and 8,000 testing images, both of them 96×96 pixels; 2) ImageNet-143, which is a subset of ImageNet (Deng et al., 2009)... (A torchvision loading sketch follows the table.)
Dataset Splits | No | The paper defines the training/testing datasets as D_tr / D_te with sizes N_tr / N_te, respectively, and refers to the training and testing phases. It discusses the test set extensively for evaluation. However, it does not explicitly mention a validation set or specific proportions for training/validation/test splits, nor does it cite predefined splits that include a validation set.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions PyTorch in the context of code implementation examples ("We take PyTorch as an example, the code snippet is shown in Alg. 1."), but it does not specify any version numbers for PyTorch or any other software dependencies, libraries, or solvers used in the experiments.
Experiment Setup | Yes | We list the key hyper-parameters in Tab. 2; note that we did not tune the hyper-parameters very hard, therefore it is entirely possible to find better ones. Table 2: Hyper-parameters setting in our experiments.
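
The headline result in the Research Type row is measured under the PGD attack at 0.035 ℓ∞ distortion. For context, below is a minimal ℓ∞ PGD sketch in PyTorch; the step size `alpha`, step count, and random start are illustrative assumptions rather than the paper's exact attack configuration, and attacking a randomized model such as Adv-BNN would in practice also average gradients over the injected noise.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.035, alpha=0.007, steps=20):
    # l_inf PGD (Madry et al., 2017): start from a random point in the
    # eps-ball around x, take signed-gradient ascent steps on the loss,
    # and project back into the ball after every step.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project onto the eps-ball, then back into valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```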
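The paper's Alg. 2 implements the variational Linear layer on top of its RandLayer. As a rough sketch of the same idea, not the authors' code, a reparameterized Gaussian-posterior Linear layer might look as follows; the class name `VariationalLinear`, the initialization scheme, and the fixed initial log-sigma of -5.0 are assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    # Each weight has a learned Gaussian posterior N(mu, sigma^2);
    # a fresh weight sample is drawn on every forward pass via the
    # reparameterization trick, which is what makes the network Bayesian.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.empty(out_features, in_features))
        self.log_sigma = nn.Parameter(
            torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.mu, a=math.sqrt(5))

    def forward(self, x):
        sigma = self.log_sigma.exp()
        eps = torch.randn_like(sigma)       # the RandLayer role: Gaussian noise
        weight = self.mu + sigma * eps      # reparameterized weight sample
        return F.linear(x, weight, self.bias)
```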
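CIFAR-10 and STL-10 are public and ship with torchvision, so the evaluation data is straightforward to obtain; ImageNet-143 is a custom subset of ImageNet with no packaged loader. A minimal loading sketch, with the transform and batch size as assumptions:

```python
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

transform = T.ToTensor()  # scale images to [0, 1]

# Predefined splits: CIFAR-10 (50k train / 10k test),
# STL-10 (5,000 train / 8,000 test, 96x96 pixels).
cifar_train = torchvision.datasets.CIFAR10(
    './data', train=True, download=True, transform=transform)
stl_train = torchvision.datasets.STL10(
    './data', split='train', download=True, transform=transform)
stl_test = torchvision.datasets.STL10(
    './data', split='test', download=True, transform=transform)

train_loader = DataLoader(stl_train, batch_size=128, shuffle=True)
```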