How Tempering Fixes Data Augmentation in Bayesian Neural Networks

Authors: Gregor Bachmann, Lorenzo Noci, Thomas Hofmann

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform exhaustive experiments to validate our theoretical insights. In particular, by relying on the framework of group convolutions, we design architectures for which the invariance with respect to certain augmentations is approximately built into the model. In turn, we observe a clear correlation between the degree of model invariance and optimal temperature. (An illustrative group-convolution sketch is included after this table.)
Researcher Affiliation | Academia | Department of Computer Science, ETH Zürich, Zürich, Switzerland.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'For the SG-MCMC sampler, we adapted the code from Wenzel et al. (2020). For the implementation of the group equivariant layers, we used the code from Veeling et al. (2018).' This indicates they adapted existing code, but it does not state that they release their own source code for the methodology presented in this paper.
Open Datasets | Yes | For the experiments with ResNet20 and G-ResNets on CIFAR-10... Residual vs. Order Plot: residuals of an untrained ResNet18 on the Dogs vs. Cats dataset. (A standard CIFAR-10 loading and augmentation sketch is included after this table.)
Dataset Splits | No | The paper mentions training but does not provide specific details of a validation split (e.g., percentages or counts) needed for reproducibility.
Hardware Specification | Yes | Finally, the experiments are executed on Nvidia DGX-1 GPU nodes equipped with 4 20-core Xeon E5-2698v4 processors, 512 GB of memory and 8 Nvidia V100 GPUs.
Software Dependencies | No | The paper states: 'For the SG-MCMC sampler, we adapted the code from Wenzel et al. (2020). For the implementation of the group equivariant layers, we used the code from Veeling et al. (2018).' It does not provide specific version numbers for any software dependencies such as programming languages or libraries.
Experiment Setup | Yes | For the experiments with ResNet20 and G-ResNets on CIFAR-10, we have the following hyperparameters: initial learning rate: 0.1; burn-in period: 150 epochs; cycle length: 50 epochs; total training time: 1500 epochs [...] The batch size is 128 across all experiments. (A sketch of the implied cyclical step-size schedule is included after this table.)
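
The 'Research Type' row quotes the paper's use of group convolutions to build approximate invariance into the architecture. Below is a minimal, illustrative sketch of a p4 lifting convolution, the basic building block of such group-equivariant layers, written in PyTorch. It is not the authors' G-ResNet implementation (they adapted the code of Veeling et al., 2018); the class name P4LiftingConv and all implementation details are our own assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class P4LiftingConv(nn.Module):
        """Lifting convolution over the p4 group (rotations by 0, 90, 180, 270 degrees).

        Applying the same filter in all four orientations makes the layer
        equivariant to 90-degree rotations of the input; pooling over the
        resulting group axis later in the network yields approximate
        invariance to those rotations.
        """

        def __init__(self, in_channels, out_channels, kernel_size, padding=0):
            super().__init__()
            self.weight = nn.Parameter(
                torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01
            )
            self.padding = padding

        def forward(self, x):
            responses = []
            for r in range(4):
                # Rotate the filter bank by r * 90 degrees in the spatial dimensions.
                w = torch.rot90(self.weight, r, dims=(2, 3))
                responses.append(F.conv2d(x, w, padding=self.padding))
            # Shape (batch, out_channels, 4, H, W); the new axis indexes the rotation.
            return torch.stack(responses, dim=2)

    # Invariance is obtained by pooling over the group (and spatial) axes, e.g.:
    #   feats = P4LiftingConv(3, 16, 3, padding=1)(images)  # (B, 16, 4, H, W)
    #   invariant = feats.max(dim=2).values                 # pool over rotations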
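
The 'Open Datasets' row refers to CIFAR-10 and a Dogs vs. Cats dataset. The snippet below shows a conventional CIFAR-10 loading and augmentation pipeline with torchvision (random crop plus horizontal flip); the specific transforms and normalisation constants are standard defaults rather than values confirmed by the paper.

    import torchvision.transforms as T
    from torchvision.datasets import CIFAR10

    # Typical CIFAR-10 augmentation: pad-and-crop plus horizontal flip,
    # followed by normalisation with the usual CIFAR-10 channel statistics.
    train_transform = T.Compose([
        T.RandomCrop(32, padding=4),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
    ])

    train_set = CIFAR10(root="./data", train=True, download=True, transform=train_transform)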
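
The 'Experiment Setup' row reports an initial learning rate of 0.1, a 150-epoch burn-in, 50-epoch cycles, 1500 epochs of training and batch size 128. The sketch below turns those numbers into a cyclical step-size schedule of the kind used by SG-MCMC samplers such as the one the authors adapted from Wenzel et al. (2020); the cosine form and the function name step_size are assumptions, not the paper's verbatim schedule.

    import math

    # Hyperparameters as reported in the 'Experiment Setup' row.
    INIT_LR = 0.1          # initial learning rate
    BURN_IN_EPOCHS = 150   # burn-in period
    CYCLE_LEN = 50         # length of each sampling cycle
    TOTAL_EPOCHS = 1500    # total training time
    BATCH_SIZE = 128       # used across all experiments

    def step_size(epoch):
        """Cosine-decayed step size within each cycle after burn-in (assumed form)."""
        if epoch < BURN_IN_EPOCHS:
            return INIT_LR  # keep the initial step size during burn-in
        t = ((epoch - BURN_IN_EPOCHS) % CYCLE_LEN) / CYCLE_LEN  # position in cycle, in [0, 1)
        return 0.5 * INIT_LR * (1.0 + math.cos(math.pi * t))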