Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning
Authors: Veit David Wild, Robert Hu, Dino Sejdinovic
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed method obtains state-of-the-art performance on several benchmark datasets. |
| Researcher Affiliation | Collaboration | Veit D. Wild, Department of Statistics, University of Oxford (veit.wild@stats.ox.ac.uk); Robert Hu, Amazon (robyhu@amazon.co.uk); Dino Sejdinovic, School of Computer and Mathematical Sciences, University of Adelaide (dino.sejdinovic@adelaide.edu.au) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | Codebase: https://github.com/MrHuff/GWI |
| Open Datasets | Yes | UCI Regression... Fashion MNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2009] |
| Dataset Splits | No | The paper states 'We train on random 90% of the data and predict on 10%' for the train/test split, but does not describe a separate validation split in the main text. |
| Hardware Specification | Yes | All experiments were performed on a single NVIDIA GeForce RTX 3090 GPU with 24GB of memory. |
| Software Dependencies | No | The paper mentions using the 'deepobs library [Schneider et al., 2019]' but does not provide version numbers for its software dependencies. |
| Experiment Setup | Yes | For the UCI experiments, we used a single hidden layer MLP with 50 units and ReLU activations. We used a batch size of 128 and trained for 200 epochs using the Adam optimizer with a learning rate of 0.001. |
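
The setup described in the last two table rows (random 90/10 train/test split; single-hidden-layer MLP with 50 ReLU units; Adam with learning rate 0.001, batch size 128, 200 epochs) can be summarized in a short script. The sketch below is an illustration only, assuming a PyTorch backend (the linked codebase is PyTorch-based); it uses a plain MSE regression loss rather than the paper's GWI objective, and the data loader `load_uci_dataset` is a hypothetical placeholder.

```python
# Minimal sketch of the UCI regression setup quoted in the table.
# Assumptions: PyTorch; a hypothetical load_uci_dataset() returning (X, y)
# tensors; plain MSE loss stands in for the paper's GWI objective.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def load_uci_dataset():
    # Placeholder: substitute a real UCI regression dataset loader here.
    X = torch.randn(1000, 8)
    y = torch.randn(1000, 1)
    return X, y

X, y = load_uci_dataset()
dataset = TensorDataset(X, y)

# Random 90% train / 10% test split, as quoted from the paper.
n_train = int(0.9 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# Single hidden layer MLP with 50 units and ReLU activations.
model = nn.Sequential(nn.Linear(X.shape[1], 50), nn.ReLU(), nn.Linear(50, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# 200 epochs with batch size 128, as stated in the Experiment Setup row.
for epoch in range(200):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```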