Bayesian Adversarial Learning

Authors: Nanyang Ye, Zhanxing Zhu

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed methods on two commonly used datasets, MNIST and CIFAR-10, and compare with various adversarial learning methods. As shown in Table 1, Bayesian adversarial training consistently achieves higher adversarial accuracy on both MNIST and CIFAR-10 on a variety of attacks.
Researcher Affiliation | Collaboration | Nanyang Ye (University of Cambridge); Zhanxing Zhu (Peking University)
Pseudocode | Yes | Algorithm 1: Bayesian Adversarial Training (BAT)
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology, nor a direct link to a code repository.
Open Datasets | Yes | We evaluate the proposed methods on two commonly used datasets, MNIST and CIFAR-10, and compare with various adversarial learning methods.
Dataset Splits | Yes | For MNIST, we use a 60K training set and 10K test set. For CIFAR-10, we use a 50K training set and 10K test set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or processor types) used for running its experiments.
Software Dependencies | No | The paper does not list the software dependencies, with version numbers (e.g., Python 3.8, PyTorch 1.9), that would be needed to replicate the experiments.
Experiment Setup | Yes | For MNIST, we use a LeNet-5 architecture [22] with 2 convolutional layers and 2 fully connected layers, trained for 100 epochs with the Adam optimizer at a learning rate of 0.001. For CIFAR-10, we use a Wide ResNet-28-10 architecture [32], trained for 200 epochs with the Adam optimizer at a learning rate of 0.001.
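The pseudocode row above refers to the paper's Algorithm 1 (Bayesian Adversarial Training). As a rough illustration of the idea only, not the authors' algorithm, the toy sketch below pairs a signed-gradient (FGSM-style) adversarial perturbation of the input with an SGLD-style noisy parameter update, on a scalar linear model; every name and hyperparameter here is invented for the example.

```python
import math
import random

# Toy model: y = w * x with squared loss 0.5 * (w*x - y)**2.
# The real paper trains deep networks; this scalar version only
# illustrates the two nested sampling steps of the algorithm.

def loss_grad_x(w, x, y):
    # d/dx of 0.5*(w*x - y)^2 -- used to craft the adversarial input.
    return (w * x - y) * w

def loss_grad_w(w, x, y):
    # d/dw of 0.5*(w*x - y)^2 -- used for the parameter update.
    return (w * x - y) * x

def bayesian_adv_train(data, epsilon=0.1, lr=0.05, steps=500,
                       noise_scale=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for t in range(steps):
        x, y = data[t % len(data)]
        # Inner step: FGSM-style perturbation within an epsilon ball.
        x_adv = x + epsilon * math.copysign(1.0, loss_grad_x(w, x, y))
        # Outer step: SGLD update = gradient descent plus Gaussian noise,
        # so successive iterates roughly sample a posterior over w.
        noise = noise_scale * math.sqrt(lr) * rng.gauss(0.0, 1.0)
        w = w - lr * loss_grad_w(w, x_adv, y) + noise
    return w

# Data generated from y = 2x; the SGLD iterates hover near w = 2.
data = [(x / 10.0, 2.0 * x / 10.0) for x in range(1, 11)]
w_hat = bayesian_adv_train(data)
```

With `epsilon=0.0` and `noise_scale=0.0` the same loop reduces to plain per-sample gradient descent and recovers the true slope on the toy data, which is a quick sanity check on the update rule.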
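The reported setup can be collected into one place for reimplementation. The hyperparameters below are transcribed from the quoted experiment-setup text; the dictionary layout and key names are illustrative, not taken from the authors' code.

```python
# Hyperparameters as stated in the paper's experiment setup.
# Architecture strings describe the reported models; they are
# labels here, not references to any released implementation.
EXPERIMENTS = {
    "MNIST": {
        "architecture": "LeNet-5 (2 conv + 2 FC layers)",
        "epochs": 100,
        "optimizer": "Adam",
        "learning_rate": 1e-3,
        "train_size": 60_000,
        "test_size": 10_000,
    },
    "CIFAR-10": {
        "architecture": "Wide ResNet-28-10",
        "epochs": 200,
        "optimizer": "Adam",
        "learning_rate": 1e-3,
        "train_size": 50_000,
        "test_size": 10_000,
    },
}
```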