Improving Bayesian Neural Networks by Adversarial Sampling
Authors: Jiaru Zhang, Yang Hua, Tao Song, Hao Wang, Zhengui Xue, Ruhui Ma, Haibing Guan (pp. 10110–10117)
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments with multiple network structures on different datasets, e.g., CIFAR-10 and CIFAR-100. Experimental results validate the correctness of the theoretical analysis and the effectiveness of the Adversarial Sampling on improving model performance. |
| Researcher Affiliation | Academia | 1 Shanghai Jiao Tong University, 2 Queen's University Belfast, 3 Louisiana State University |
| Pseudocode | Yes | Algorithm 1: Training with Adversarial Sampling |
| Open Source Code | Yes | We release our codes at https://github.com/AISIGSJTU/AS. |
| Open Datasets | Yes | We train a variety of Bayesian neural networks, including ResNet20, ResNet56 (He et al. 2016), and VGG (Simonyan and Zisserman 2015), on CIFAR-10 and CIFAR-100 datasets (Krizhevsky 2009). |
| Dataset Splits | No | The paper mentions training on CIFAR-10 and CIFAR-100 datasets and evaluating on test sets, but does not explicitly provide details about training/validation/test splits (e.g., percentages, sample counts, or specific cross-validation setup). |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | For simplicity, we set the hyperparameter α = 0.02 and N = 5 on models trained with Adversarial Sampling. |
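The table quotes two hyperparameters, α = 0.02 and N = 5, for training with Adversarial Sampling (Algorithm 1). One plausible reading, sketched below on a deliberately tiny one-weight model, is that the reparameterisation noise ε of a posterior weight sample is pushed by N gradient-ascent steps of size α toward a higher-loss draw before the usual variational update. The toy model, function names, and exact update rule here are illustrative assumptions, not the paper's Algorithm 1.

```python
import random

def loss(mu, sigma, eps, x, t):
    """Squared error of a 1-D Bayesian 'network' w*x, with w = mu + sigma*eps."""
    w = mu + sigma * eps  # reparameterisation: weight drawn from N(mu, sigma^2)
    return (w * x - t) ** 2

def adversarial_eps(eps0, mu, sigma, x, t, alpha=0.02, n_steps=5):
    """Gradient-ascend the sampling noise eps to increase the training loss.

    alpha and n_steps mirror the alpha = 0.02 and N = 5 quoted in the table;
    the update rule itself is an assumption, not the paper's Algorithm 1.
    """
    eps = eps0
    for _ in range(n_steps):
        w = mu + sigma * eps
        grad = 2.0 * (w * x - t) * x * sigma  # dL/d(eps), computed analytically
        eps = eps + alpha * grad              # ascent step: make the draw harder
    return eps

random.seed(0)
mu, sigma, x, t = 0.5, 0.3, 1.0, 1.0  # toy parameters and one data point
eps0 = random.gauss(0.0, 1.0)         # ordinary posterior sample
eps_adv = adversarial_eps(eps0, mu, sigma, x, t)
# The shifted sample yields an equal-or-higher loss, so the subsequent update
# on (mu, sigma) would train the model against a harder weight draw.
```

Because the toy loss is convex in ε, each ascent step can only keep or increase the loss, which is the property the adversarially selected sample needs.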