Repulsive Deep Ensembles are Bayesian
Authors: Francesco D'Angelo, Vincent Fortuin
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study repulsive terms in weight and function space and empirically compare their performance to standard ensembles and Bayesian baselines on synthetic and real-world prediction tasks. |
| Researcher Affiliation | Academia | Francesco D'Angelo, ETH Zürich, Zürich, Switzerland (dngfra@gmail.com); Vincent Fortuin, ETH Zürich, Zürich, Switzerland (fortuin@inf.ethz.ch) |
| Pseudocode | No | No explicit pseudocode or algorithm block found. |
| Open Source Code | No | No explicit statement or link for open-source code for the methodology. |
| Open Datasets | Yes | Fashion MNIST dataset [83] for training and the MNIST dataset [43] as an out-of-distribution (OOD) task. ... CIFAR-10 [41] with the SVHN dataset [61] as OOD data. |
| Dataset Splits | No | The paper mentions using datasets for training and OOD tasks but does not specify exact train/validation/test splits (e.g., percentages or counts), nor does it reference standard splits. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, or specific computer configurations) are mentioned for running experiments. |
| Software Dependencies | No | No specific software dependencies (e.g., library or solver names with version numbers) are mentioned. |
| Experiment Setup | No | The paper mentions using an RBF kernel with a median heuristic for bandwidth selection and an adaptive bandwidth. However, it does not provide specific hyperparameter values like learning rate, batch size, number of epochs, or optimizer settings for the neural network training. |
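For context on the one setup detail the paper does report: the "median heuristic" sets the RBF kernel bandwidth from the median pairwise distance between ensemble members. The sketch below is a generic illustration of that heuristic (with a common SVGD-style `log(n)` scaling), not the authors' implementation; the exact scaling and adaptation scheme used in the paper are not specified.

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Bandwidth via the median heuristic.

    X: (n, d) array, one row per ensemble member (particle).
    Returns h = median(pairwise sq. distance factor) / log(n + 1),
    a common SVGD-style choice; the paper's exact scaling is unknown.
    """
    diffs = X[:, None, :] - X[None, :, :]          # (n, n, d) pairwise differences
    dists = np.sqrt((diffs ** 2).sum(-1))          # (n, n) Euclidean distances
    iu = np.triu_indices(len(X), k=1)              # off-diagonal upper triangle
    med = np.median(dists[iu])
    return med ** 2 / np.log(len(X) + 1)

def rbf_kernel(X, h=None):
    """RBF (Gaussian) kernel matrix K_ij = exp(-||x_i - x_j||^2 / h)."""
    if h is None:
        h = median_heuristic_bandwidth(X)          # adaptive bandwidth each call
    diffs = X[:, None, :] - X[None, :, :]
    sq_dists = (diffs ** 2).sum(-1)
    return np.exp(-sq_dists / h)
```

A kernel matrix like this would supply the repulsive term between ensemble members; recomputing the bandwidth each step makes it adaptive, matching the qualitative description in the paper.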