Robustness of Bayesian Neural Networks to Gradient-Based Attacks
Authors: Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patanè, Luca Bortolussi, Guido Sanguinetti
NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the MNIST and Fashion MNIST datasets, representing the finite data regime, with BNNs trained with Hamiltonian Monte Carlo and Variational Inference, support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient-based adversarial attacks. |
| Researcher Affiliation | Academia | Ginevra Carbone*, Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy (ginevra.carbone@phd.units.it); Matthew Wicker*, Department of Computer Science, University of Oxford, Oxford, United Kingdom (matthew.wicker@wolfson.ox.ac.uk); Luca Laurenti, Department of Computer Science, University of Oxford, Oxford, United Kingdom (luca.laurenti@cs.ox.ac.uk); Andrea Patanè, Department of Computer Science, University of Oxford, Oxford, United Kingdom (patane.andre@gmail.com); Luca Bortolussi, Department of Mathematics and Geosciences, University of Trieste, Trieste, Italy and Modeling and Simulation Group, Saarland University, Saarland, Germany (luca.bortolussi@gmail.com); Guido Sanguinetti, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom and SISSA, Trieste, Italy (gsanguin@inf.ed.ac.uk) |
| Pseudocode | No | The paper describes methods and theoretical proofs but does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code for the experiments can be found at: https://github.com/ginevracoal/robustBNNs. |
| Open Datasets | Yes | We train a variety of BNNs on the MNIST and Fashion MNIST [Xiao et al., 2017] datasets, and evaluate their posterior distributions using HMC and VI approximate inference methods. |
| Dataset Splits | No | The paper mentions using MNIST and Fashion MNIST datasets and evaluating on 'test images', but it does not provide specific train/validation/test split percentages or sample counts to reproduce the data partitioning. |
| Hardware Specification | No | The paper discusses training models and conducting experiments but does not specify any hardware components like GPU models, CPU types, or memory details used for the computations. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the experiments. |
| Experiment Setup | Yes | Specifically, we train a two-hidden-layer BNN (with 1024 neurons per layer, for a total of about 1.8 million parameters) with HMC and a three-hidden-layer BNN (512 neurons per layer) with VI. |
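The "about 1.8 million parameters" figure in the Experiment Setup row can be checked with a quick parameter count. The sketch below assumes fully connected layers, a flattened 28×28 MNIST/Fashion-MNIST input (784 features), and 10 output classes; the paper quotes only the hidden-layer widths, so the input and output sizes are assumptions.

```python
# Hedged sanity check: parameter counts for the two BNN architectures
# described in the table. Input size 784 and 10 output classes are
# assumed from the MNIST / Fashion-MNIST setting, not stated verbatim.

def dense_param_count(layer_sizes):
    """Total weights + biases of a fully connected network
    whose consecutive layer widths are given in layer_sizes."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))

# HMC model: two hidden layers of 1024 neurons each.
hmc_params = dense_param_count([784, 1024, 1024, 10])

# VI model: three hidden layers of 512 neurons each.
vi_params = dense_param_count([784, 512, 512, 512, 10])

print(hmc_params)  # → 1863690, i.e. "about 1.8 million parameters"
print(vi_params)   # → 932362
```

The HMC architecture comes out at 1,863,690 weights and biases, consistent with the paper's rounded figure.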