Statistical Guarantees for the Robustness of Bayesian Neural Networks
Authors: Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental comparison for several approximate BNN inference techniques on image classification tasks associated to MNIST and a two-class subset of the GTSRB dataset. |
| Researcher Affiliation | Academia | ¹University of Oxford, ²Royal Holloway, University of London |
| Pseudocode | Yes | Algorithm 1 BNN robustness estimation (a hedged sketch of the sampling loop is given below the table) |
| Open Source Code | Yes | Code is available at https://github.com/matthewwicker/StatisticalGuarenteesForBNNs |
| Open Datasets | Yes | MNIST handwritten digits dataset [LeCun and Cortes, 2010] and a two-class subset of the German Traffic Sign Recognition Benchmark (GTSRB) [Stallkamp et al., 2012] |
| Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits (e.g., percentages or sample counts). It mentions training on MNIST and GTSRB and using the MNIST test dataset, but no explicit split details. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Edward Python package [Tran et al., 2016]' but does not specify version numbers for Edward or Python, or any other software dependencies with versions. |
| Experiment Setup | Yes | We used an update step size of 0.01 and the numerical integrator was given 5 update steps per sample. The Gaussian priors on the convolutional filters were set to have mean 1 and variance 0.01 and the Gaussian priors of the fully connected layer were centred at 0 with variance 1. For the network on GTSRB we train using a batch size of 128, and the Adam optimizer with a 0.001 learning rate over 15000 iterations. For MNIST we trained for 5000 iterations with a higher learning rate of 0.01. This resulted in the 0.5 dropout rate in the BNN used for MNIST, and 0.25 and 0.5 respectively for the two layers that make up the CNN used for GTSRB. (A hedged configuration sketch is given below the table.) |
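
The Pseudocode row refers to Algorithm 1, which estimates the probability that a BNN is robust at a test input by drawing weight samples from the approximate posterior and checking local robustness of each sampled deterministic network. The Python sketch below is a minimal illustration of that sampling loop under stated assumptions, not the authors' implementation: `sample_posterior_weights` and `is_locally_robust` are hypothetical placeholders, and the Chernoff-Hoeffding bound is used as a standard way to pick the sample size for an (ε, δ) statistical guarantee.

```python
import math

def chernoff_sample_size(eps, delta):
    # Two-sided Chernoff-Hoeffding bound: with n >= ln(2/delta) / (2 * eps^2)
    # i.i.d. samples, the empirical mean is within eps of the true probability
    # with confidence at least 1 - delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_robustness(sample_posterior_weights, is_locally_robust,
                        x, eps=0.05, delta=0.01):
    """Estimate P_w[ the network with sampled weights w is robust at x ].

    sample_posterior_weights() -> one weight realisation from the approximate
                                  posterior (hypothetical placeholder).
    is_locally_robust(w, x)    -> bool robustness check for the resulting
                                  deterministic network (hypothetical placeholder).
    """
    n = chernoff_sample_size(eps, delta)
    hits = sum(is_locally_robust(sample_posterior_weights(), x) for _ in range(n))
    return hits / n
```

Any robustness check for the sampled deterministic network (e.g. an attack-based or bound-based verifier) can be plugged into `is_locally_robust`; the statistical guarantee only depends on the samples being independent draws from the posterior.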
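For the Experiment Setup row, the following Keras-style sketch shows how the quoted GTSRB settings (dropout rates 0.25 and 0.5, batch size 128, Adam with learning rate 0.001) might be wired together. The layer widths, filter counts, and input shape are assumptions not stated in the paper, and the original experiments used the Edward package rather than Keras, so this is illustrative only.

```python
import tensorflow as tf

def build_gtsrb_dropout_cnn(num_classes=2):
    # Dropout rates and optimiser settings are quoted from the paper;
    # filter counts, layer widths, and input shape are assumptions.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.Dropout(0.25),   # first dropout rate (paper)
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),    # second dropout rate (paper)
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_gtsrb_dropout_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # paper: 0.001
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Paper: batch size 128 for 15000 iterations; with training data loaded,
# the total number of gradient updates should come to roughly 15000, e.g.
# model.fit(x_train, y_train, batch_size=128, epochs=...)
```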