Reliable training and estimation of variance networks

Authors: Nicki Skafte, Martin Jørgensen, Søren Hauberg

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, we investigate the impact of predictive uncertainty on multiple datasets and tasks ranging from regression, active learning and generative modeling.
Researcher Affiliation | Academia | Nicki S. Detlefsen nsde@dtu.dk, Martin Jørgensen* marjor@dtu.dk, Søren Hauberg sohau@dtu.dk (*Equal contribution), Section for Cognitive Systems, Technical University of Denmark
Pseudocode | Yes | Pseudo-code of this sampling-scheme can be found in the supplementary material.
Open Source Code | Yes | Implementation details and code can be found in the supplementary material.
Open Datasets | Yes | More precisely, we consider weather data from over 130 years (https://mrcc.illinois.edu/CLIMATE/Station/Daily/Stn Dy BTD2.jsp). Each day the maximum temperature is measured... we experimented with four UCI regression datasets (Fig. 5). ... For our last set of experiments we fitted a standard VAE and our Comb-VAE to four datasets: MNIST, Fashion MNIST, CIFAR10, SVHN. (See the data-loading sketch after this table.)
Dataset Splits | No | The paper specifies a '20% train, 60% pool and 20% test' split for active learning, but does not explicitly mention a separate validation set split in the main text. (See the split sketch after this table.)
Hardware Specification | No | The paper states: 'We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPU hardware used for this research,' but does not specify the GPU model or type.
Software Dependencies | No | The paper mentions 'Implementation details and code can be found in the supplementary material,' but does not provide specific software names with version numbers in the main text.
Experiment Setup | No | The details about network architecture and training can be found in the supplementary material.
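For the Open Datasets row, the paper names MNIST, Fashion MNIST, CIFAR10 and SVHN for the generative-modeling experiments but keeps implementation details in its supplementary material. Purely as a hypothetical sketch of obtaining those four datasets (the use of torchvision is an assumption; the paper does not state which library it uses):

```python
from torchvision import datasets, transforms

# Assumption: torchvision is used only for illustration; the authors' own
# data pipeline is in their supplementary material and may differ.
to_tensor = transforms.ToTensor()

mnist   = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST(root="data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
svhn    = datasets.SVHN(root="data", split="train", download=True, transform=to_tensor)  # SVHN uses `split`, not `train`
```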
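For the Dataset Splits row, the only partition reported in the main text is 20% train, 60% pool and 20% test for active learning, with no separate validation set mentioned. A minimal sketch of such a partition follows (the helper name, the random seed and the use of NumPy are assumptions, not the authors' code):

```python
import numpy as np

def active_learning_split(n_samples: int, seed: int = 0):
    """Hypothetical helper: shuffle sample indices and cut them into the
    20% train / 60% pool / 20% test partition quoted above.
    Note: no validation indices are produced, mirroring the main text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.2 * n_samples)
    n_pool = int(0.6 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_pool], idx[n_train + n_pool:]

train_idx, pool_idx, test_idx = active_learning_split(1000)
print(len(train_idx), len(pool_idx), len(test_idx))  # -> 200 600 200
```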