How Sampling Impacts the Robustness of Stochastic Neural Networks

Authors: Sina Däubener, Asja Fischer

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Lastly, we conduct an empirical analysis that demonstrates that the novel theoretical insights perfectly match what we observe in practice. In this section we empirically demonstrate that the findings of our theoretical analysis are transferable to SNNs and help to explain the mechanisms leveraging the experimentally observed robustness of previously proposed SNNs. Our experiments are conducted on two different image datasets: Fashion MNIST [Xiao et al., 2017] and CIFAR10 [Krizhevsky et al.].
Researcher Affiliation | Academia | Sina Däubener and Asja Fischer, Department of Computer Science, Ruhr University Bochum, Germany, {sina.daeubener, asja.fischer}@rub.de
Pseudocode | No | The paper contains mathematical derivations and theoretical explanations but does not include any explicit pseudocode blocks or algorithm listings.
Open Source Code | Yes | We extended the description of datasets, models and training procedure from section 5 in supplement B and provided the code used in the main paper in the supplemental material.
Open Datasets | Yes | Our experiments are conducted on two different image datasets: Fashion MNIST [Xiao et al., 2017] and CIFAR10 [Krizhevsky et al.].
Dataset Splits | No | The paper does not explicitly provide training, validation, and test dataset splits with percentages, counts, or specific methodology in the main text. It refers to supplementary material for more details, which is not included here.
Hardware Specification | Yes | All experiments were run on a single NVIDIA GeForce RTX 2080 Ti.
Software Dependencies | No | The paper mentions using the CleverHans repository and Python packages but does not provide specific version numbers for any software dependencies such as Python, PyTorch, or other libraries.
Experiment Setup | Yes | If not specified otherwise we used 100 samples of p(Θ) for inference on all datasets and calculated adversarial attacks with the fast gradient (sign) method (FGM) [Goodfellow et al., 2015], the cross-entropy loss for IMs, BNNs, and ResNets, the margin loss L_margin specified in section 3 for SINs, and the ℓ2-norm constraint based on the CleverHans repository [Papernot et al., 2018]. For experiments on CIFAR10 we trained two wide residual networks (ResNet) with MC dropout layers [Gal and Ghahramani, 2016] applied after the convolution blocks and dropout probabilities p = 0.3 and p = 0.6.
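The experiment setup quoted above can be illustrated with a minimal sketch. This is not the authors' released code (their attacks are built on the CleverHans repository and their implementation is in the supplemental material); it only assumes a PyTorch model with dropout layers, and the names mc_dropout_predict, fgm_l2_attack, n_samples, and eps are placeholders chosen here for illustration.

```python
# Sketch (assumptions noted above): (1) predicting with a stochastic network by
# averaging Monte Carlo dropout samples of p(Theta), and (2) a single-step,
# l2-constrained fast gradient method (FGM) attack on that averaged prediction.
import torch
import torch.nn.functional as F


def mc_dropout_predict(model, x, n_samples=100):
    """Average class probabilities over stochastic forward passes.

    Dropout stays active at test time (model.train()), so each forward pass
    corresponds to one sample of the weights Theta; the paper uses 100 samples.
    """
    model.train()  # keep dropout active for MC sampling
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=1) for _ in range(n_samples)], dim=0
        )
    return probs.mean(dim=0)  # approximate predictive distribution


def fgm_l2_attack(model, x, y, eps, n_samples=10):
    """One-step fast gradient method with an l2-norm constraint.

    The cross-entropy loss is taken on the prediction averaged over sampled
    networks, and its gradient is rescaled to have l2-norm eps per example.
    """
    model.train()
    x_adv = x.clone().detach().requires_grad_(True)
    probs = torch.stack(
        [F.softmax(model(x_adv), dim=1) for _ in range(n_samples)], dim=0
    ).mean(dim=0)
    loss = F.nll_loss(torch.log(probs + 1e-12), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    # normalise the gradient per example to unit l2-norm, then scale by eps
    grad_norm = grad.view(grad.size(0), -1).norm(p=2, dim=1).clamp_min(1e-12)
    perturbation = eps * grad / grad_norm.view(-1, *([1] * (grad.dim() - 1)))
    return (x_adv + perturbation).detach()
```

Calling mc_dropout_predict(model, x) mirrors the 100-sample inference described in the row above, while fgm_l2_attack corresponds to the ℓ2-constrained FGM attack on the cross-entropy loss used for IMs, BNNs, and ResNets; for SINs the loss term would be swapped for the margin loss from section 3 of the paper.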