Laplace Redux - Effortless Bayesian Deep Learning

Authors: Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this work we show that these are misconceptions: we (i) review the range of variants of the LA including versions with minimal cost overhead; (ii) introduce laplace, an easy-to-use software library for PyTorch offering user-friendly access to all major flavors of the LA; and (iii) demonstrate through extensive experiments that the LA is competitive with more popular alternatives in terms of performance, while excelling in terms of computational cost."
Researcher Affiliation | Collaboration | Erik Daxberger (c, m), Agustinus Kristiadi (t), Alexander Immer (e, p), Runa Eschenhagen (t), Matthias Bauer (d), Philipp Hennig (t, m). Affiliations: (c) University of Cambridge; (m) MPI for Intelligent Systems, Tübingen; (t) University of Tübingen; (e) Department of Computer Science, ETH Zurich; (p) Max Planck ETH Center for Learning Systems; (d) DeepMind, London.
Pseudocode | No | The paper includes code snippets labeled "Listing 1" and "Listing 2", but these are actual PyTorch code examples rather than structured pseudocode or algorithm blocks.
Open Source Code | Yes | laplace library: https://github.com/AlexImmer/Laplace
Open Datasets | Yes | "We measured in- and out-of-distribution performance on standard image classification benchmarks (MNIST, FashionMNIST, CIFAR-10) ... To this end, we use WILDS [68], a recently proposed benchmark of realistic distribution shifts..."
Dataset Splits | Yes | "Commonly, this is done through cross-validation, e.g. by maximizing the validation log-likelihood [23, 48]... laplace also supports standard cross-validation for hyperparameter tuning [23, 28], as shown in Listing 1." (a data-loading and validation-split sketch follows the table)
Hardware Specification | Yes | "We perform memory analysis on a single NVIDIA GeForce RTX 2080 Ti GPU." (a peak-memory measurement sketch follows the table)
Software Dependencies | No | "We implement all models in PyTorch [59], using the BackPACK [21] library for computing the Hessians, and the asdfghjkl [60] library for the KFAC approximations." The software names are cited, but no version numbers are given for these dependencies. (a version-recording sketch follows the table)
Experiment Setup | Yes | "More details on the experimental setup are provided in Appendix C.3. ... Listing 1: Fit diagonal LA over all weights of a pre-trained classification model, do post-hoc tuning of the prior precision hyperparameter using cross-validation..." (a sketch of this workflow follows the table)
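
The quoted Listing 1 appears here only in fragments, so the following is a minimal sketch of that workflow against the laplace library, with a toy model and synthetic loaders standing in for the pre-trained network and the real benchmark data; the laplace-torch distribution name and the method="CV" argument follow the paper's listing and repository and may differ in newer library versions.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from laplace import Laplace  # distribution name laplace-torch is assumed

# Toy stand-ins for a pre-trained MAP classifier and its data loaders
model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 10))
train_loader = DataLoader(
    TensorDataset(torch.randn(500, 20), torch.randint(0, 10, (500,))),
    batch_size=64)
val_loader = DataLoader(
    TensorDataset(torch.randn(100, 20), torch.randint(0, 10, (100,))),
    batch_size=64)

# Diagonal Laplace approximation over all weights, as in Listing 1
la = Laplace(model, "classification",
             subset_of_weights="all", hessian_structure="diag")
la.fit(train_loader)

# Post-hoc prior-precision tuning via cross-validation on a held-out loader;
# the method name follows the paper's listing and may differ across versions
la.optimize_prior_precision(method="CV", val_loader=val_loader)

# Approximate predictive distribution with a probit link approximation
pred = la(torch.randn(5, 20), link_approx="probit")
```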
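For the dataset and split rows, a hedged sketch of loading one of the named benchmarks (CIFAR-10) via torchvision and holding out a validation set for the cross-validation-based tuning; the data path, 10% split fraction, batch size, and seed are assumptions, not values from the paper.

```python
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, random_split

tf = transforms.ToTensor()
full_train = datasets.CIFAR10("data/", train=True, download=True, transform=tf)

# Hold out 10% of the training set for validation (fraction assumed)
n_val = len(full_train) // 10
train_set, val_set = random_split(
    full_train, [len(full_train) - n_val, n_val],
    generator=torch.Generator().manual_seed(0))  # fixed seed for reproducibility

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=128)
```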
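The hardware row names the GPU but not the measurement procedure; one plausible way to record peak GPU memory in PyTorch is sketched below, with the workload itself left as a placeholder.

```python
import torch

assert torch.cuda.is_available(), "requires a CUDA GPU"
torch.cuda.reset_peak_memory_stats()

# ... run the workload under test here, e.g. fitting or querying the LA ...

peak_mib = torch.cuda.max_memory_allocated() / 2**20  # bytes -> MiB
print(f"peak GPU memory: {peak_mib:.1f} MiB")
```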
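Because no dependency versions are pinned in the paper, a reproduction would need to record them itself; a small standard-library sketch follows, where the PyPI distribution names are assumptions.

```python
from importlib.metadata import PackageNotFoundError, version

# PyPI distribution names are assumed; adjust them to your environment.
for dist in ("torch", "backpack-for-pytorch", "asdfghjkl", "laplace-torch"):
    try:
        print(f"{dist}=={version(dist)}")
    except PackageNotFoundError:
        print(f"{dist}: not installed")
```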