Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
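Validation of this kind amounts to comparing the pipeline's labels against the manual gold labels and reporting agreement. A minimal sketch of that comparison is below; the label values and the `validation_accuracy` helper are illustrative assumptions, not the actual pipeline or metrics from [1].

```python
def validation_accuracy(llm_labels, manual_labels):
    """Fraction of items where the LLM label matches the manual label."""
    assert len(llm_labels) == len(manual_labels)
    matches = sum(l == m for l, m in zip(llm_labels, manual_labels))
    return matches / len(manual_labels)

# Toy example: labels for one reproducibility variable across five papers.
llm    = ["Yes", "No", "Yes", "No", "Yes"]
manual = ["Yes", "No", "No",  "No", "Yes"]
print(validation_accuracy(llm, manual))  # 0.8
```

Real validation would typically also report per-class metrics (precision/recall per variable), since "Yes" and "No" labels can be heavily imbalanced.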

Repulsive Deep Ensembles are Bayesian

Authors: Francesco D'Angelo, Vincent Fortuin

NeurIPS 2021 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We study repulsive terms in weight and function space and empirically compare their performance to standard ensembles and Bayesian baselines on synthetic and real-world prediction tasks. |
| Researcher Affiliation | Academia | Francesco D'Angelo, ETH Zürich, Zürich, Switzerland (EMAIL); Vincent Fortuin, ETH Zürich, Zürich, Switzerland (EMAIL) |
| Pseudocode | No | No explicit pseudocode or algorithm block found. |
| Open Source Code | No | No explicit statement or link to open-source code for the methodology. |
| Open Datasets | Yes | Fashion-MNIST dataset [83] for training and the MNIST dataset [43] as an out-of-distribution (OOD) task. ... CIFAR-10 [41] with the SVHN dataset [61] as OOD data. |
| Dataset Splits | No | The paper mentions using datasets for training and OOD tasks but does not specify exact train/validation/test splits (e.g., percentages or counts) or reference standard splits. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, or machine configurations) are mentioned for the experiments. |
| Software Dependencies | No | No specific software dependencies (e.g., library or solver names with version numbers) are mentioned. |
| Experiment Setup | No | The paper mentions an RBF kernel with a median heuristic for bandwidth selection and an adaptive bandwidth, but does not provide specific hyperparameter values such as learning rate, batch size, number of epochs, or optimizer settings for network training. |
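The Experiment Setup row notes that the paper uses an RBF kernel with a median-heuristic bandwidth. For context, a generic sketch of that construction is below; this is the standard textbook formulation, not the authors' adaptive scheme, and all names here are illustrative.

```python
import numpy as np

def median_heuristic_bandwidth(x):
    """Median of pairwise Euclidean distances, a common RBF bandwidth choice."""
    # Pairwise difference tensor between all particles (ensemble members).
    diffs = x[:, None, :] - x[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    # Take the median over distinct pairs only (upper triangle, no diagonal).
    iu = np.triu_indices(len(x), k=1)
    return np.median(dists[iu])

def rbf_kernel(x, h):
    """Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 h^2))."""
    diffs = x[:, None, :] - x[None, :, :]
    sq_dists = (diffs ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * h ** 2))

rng = np.random.default_rng(0)
particles = rng.normal(size=(5, 3))  # 5 ensemble members, 3 parameters each
h = median_heuristic_bandwidth(particles)
K = rbf_kernel(particles, h)
print(K.shape)  # (5, 5), with ones on the diagonal
```

In repulsive-ensemble methods, a kernel matrix like `K` drives the repulsion between ensemble members, so the bandwidth choice directly controls how strongly nearby members push apart.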