Predictive Uncertainty Estimation via Prior Networks

Authors: Andrey Malinin, Mark Gales

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on synthetic, MNIST, and CIFAR-10 data show that, unlike previous non-Bayesian methods, PNs are able to distinguish between data and distributional uncertainty. (See the sketch below the table.)
Researcher Affiliation | Academia | Andrey Malinin, Department of Engineering, University of Cambridge, am969@cam.ac.uk; Mark Gales, Department of Engineering, University of Cambridge, mjfg@eng.cam.ac.uk
Pseudocode | No | No pseudocode or algorithm blocks are explicitly present in the paper.
Open Source Code | Yes | Code available at https://github.com/KaosEngineer/DirichletPriorNetworks
Open Datasets | Yes | An in-domain misclassification detection experiment and an out-of-distribution (OOD) input detection experiment were run on the MNIST and CIFAR-10 datasets [35, 36] to assess the DPN's ability to estimate uncertainty.
Dataset Splits | No | The misclassification detection experiment was run on the MNIST valid+test set and the CIFAR-10 test set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | No | The experimental setup is described in Appendix A and additional experiments are described in Appendix B.
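
For context on the Research Type row, the snippet below is a minimal illustrative sketch (not the authors' released code) of how the uncertainty measures discussed in the paper are typically derived from the Dirichlet concentration parameters predicted by a Prior Network: total entropy of the predictive distribution, expected data (aleatoric) uncertainty, and their difference (mutual information), which captures distributional uncertainty. The function name, array shapes, and example values are assumptions made for this illustration.

import numpy as np
from scipy.special import digamma

def dirichlet_uncertainty(alpha):
    """Illustrative only. alpha: array of shape (batch, num_classes), entries > 0,
    e.g. alpha = exp(logits) as in the DPN parameterisation."""
    alpha0 = alpha.sum(axis=1, keepdims=True)      # Dirichlet precision
    probs = alpha / alpha0                         # expected categorical distribution
    # Total uncertainty: entropy of the expected (predictive) distribution.
    total_entropy = -np.sum(probs * np.log(probs), axis=1)
    # Expected data (aleatoric) uncertainty: expected entropy under the Dirichlet.
    expected_entropy = -np.sum(probs * (digamma(alpha + 1.0) - digamma(alpha0 + 1.0)), axis=1)
    # Distributional uncertainty: mutual information between labels and categorical parameters.
    mutual_information = total_entropy - expected_entropy
    return {"max_prob": probs.max(axis=1),
            "total_entropy": total_entropy,
            "expected_entropy": expected_entropy,
            "mutual_information": mutual_information}

# Example: a sharp Dirichlet concentrated on one class (confident, low uncertainty)
# versus a flat, low-precision Dirichlet (high distributional uncertainty, e.g. OOD input).
print(dirichlet_uncertainty(np.array([[100.0, 1.0, 1.0], [1.0, 1.0, 1.0]])))

These measures are the kind of per-input scores used for the misclassification and OOD detection experiments on MNIST and CIFAR-10 referenced in the table above.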