Posterior Network: Uncertainty Estimation without OOD Samples via Density-Based Pseudo-Counts

Authors: Bertrand Charpentier, Daniel Zügner, Stephan Günnemann

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section we compare our model to previous methods on a rich set of experiments." (Section 4, first sentence); "Results for the Sensorless Drive dataset are shown in Tab. 1. Tables for other datasets are in the appendix." (Section 4, Results)
Researcher Affiliation | Academia | "Bertrand Charpentier, Daniel Zügner, Stephan Günnemann, Technical University of Munich, Germany, {charpent, zuegnerd, guennemann}@in.tum.de"
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "The code and further supplementary material is available online (www.daml.in.tum.de/postnet)." (Section 4)
Open Datasets | Yes | "We evaluate on the following real-world datasets: Segment [6], Sensorless Drive [6], MNIST [22] and CIFAR10 [19]." (Section 4, Datasets)
Dataset Splits | Yes | "Moreover, for all experiments, we split the data into train, validation and test set (60%, 20%, 20%) and train/evaluate all models on 5 different splits." (Section 4, Baselines) A sketch of this split protocol appears after the table.
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions software like PyTorch via citation but does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | "The weight of the entropy regularization is a hyperparameter; experiments have shown PostNet to be fairly insensitive to it, so in our experiments we set it to 10^-5." (Section 3, end) A sketch of how such a regularization weight enters the loss appears after the table.
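
The Dataset Splits row quotes a 60%/20%/20% train/validation/test split repeated over 5 different splits. The following Python sketch illustrates one way to reproduce that protocol; the use of scikit-learn, the placeholder data, and all variable names are assumptions for illustration, not the authors' released code.

```python
# Hedged sketch of the 60% / 20% / 20% split protocol (Section 4, Baselines).
import numpy as np
from sklearn.model_selection import train_test_split

def make_split(X, y, seed):
    # 60% train; the remaining 40% is halved into validation and test (20% / 20%).
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, train_size=0.6, random_state=seed, stratify=y)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed, stratify=y_rest)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)

# Placeholder data standing in for e.g. Sensorless Drive features and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 48))
y = rng.integers(0, 11, size=1000)

# The paper trains and evaluates every model on 5 different splits.
splits = [make_split(X, y, seed) for seed in range(5)]
```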
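The Experiment Setup row fixes the entropy-regularization weight at 10^-5. The sketch below shows how such a weight might enter an entropy-regularized Dirichlet (uncertain cross-entropy) loss; the function name, tensor shapes, and use of PyTorch's Dirichlet distribution are illustrative assumptions, not the paper's released implementation.

```python
# Hedged sketch of an entropy-regularized Dirichlet loss with weight 1e-5.
import torch
from torch.distributions import Dirichlet

def uce_with_entropy_reg(alpha, targets, reg_weight=1e-5):
    """alpha: (N, C) positive Dirichlet parameters; targets: (N,) class indices."""
    alpha_0 = alpha.sum(dim=-1)                                    # Dirichlet precision
    alpha_y = alpha.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # true-class parameter
    # Uncertain cross-entropy: -E_{Dir(alpha)}[log p_y] = psi(alpha_0) - psi(alpha_y)
    uce = torch.digamma(alpha_0) - torch.digamma(alpha_y)
    # Entropy term rewards smooth (high-entropy) Dirichlet predictions.
    entropy = Dirichlet(alpha).entropy()
    return (uce - reg_weight * entropy).mean()

# Illustrative call with random Dirichlet parameters for 4 samples, 10 classes.
alpha = torch.rand(4, 10) + 1.0
targets = torch.randint(0, 10, (4,))
loss = uce_with_entropy_reg(alpha, targets)
```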