The Unreasonable Effectiveness of Deep Evidential Regression

Authors: Nis Meinert, Jakob Gawlikowski, Alexander Lavin

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification. We go on to discuss corrections and redefinitions of how aleatoric and epistemic uncertainties should be extracted from NNs.
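For context, standard Deep Evidential Regression (Amini et al. 2020), whose uncertainty extraction the paper critiques, reads both uncertainties off the predicted Normal-Inverse-Gamma parameters. A minimal sketch (function name and example values are illustrative, not from the paper):

```python
def der_uncertainties(gamma, nu, alpha, beta):
    """Extract uncertainties from the Normal-Inverse-Gamma (NIG)
    parameters (gamma, nu, alpha, beta) predicted by a Deep
    Evidential Regression head.

    Standard moment formulas (valid for alpha > 1):
      aleatoric = E[sigma^2] = beta / (alpha - 1)
      epistemic = Var[mu]    = beta / (nu * (alpha - 1))
    """
    aleatoric = beta / (alpha - 1.0)
    epistemic = beta / (nu * (alpha - 1.0))
    return aleatoric, epistemic

# Illustrative single prediction: nu=2, alpha=3, beta=4
# aleatoric = 4 / (3 - 1)       = 2.0
# epistemic = 4 / (2 * (3 - 1)) = 1.0
al, ep = der_uncertainties(gamma=0.0, nu=2.0, alpha=3.0, beta=4.0)
```

Note that epistemic uncertainty here is the aleatoric estimate scaled down by the "virtual evidence" ν, which is the coupling the paper argues makes the epistemic estimate a heuristic rather than an exact quantity.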
Researcher Affiliation | Collaboration | 1: Pasteur Labs, 19 Morris Avenue, Brooklyn Navy Yard Building 128, Brooklyn, NY 11205, USA; 2: German Aerospace Center, Institute of Data Science, Mälzerstraße 3-5, 07745 Jena, Germany
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | An open-source GitHub repository (MIT License) with the Appendix and source code for reproducing our experiments (implementations of the NNs and algorithms to generate the data) is available at https://github.com/pasteurlabs/unreasonable_effective_der.
Open Datasets | Yes | The training set consists of over 27k RGB-to-depth image pairs of indoor scenes from the NYU Depth v2 dataset (Silberman et al. 2012).
Dataset Splits | No | The paper mentions training and test data but gives no details of a validation split (e.g., percentages or exact counts).
Hardware Specification | Yes | The model and experiments are lightweight, running locally on a 4-core MacBook Pro in under an hour.
Software Dependencies | No | The paper mentions using 'scripts' but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | The NN is trained with λ = 0.01 and the Adam optimizer with an initial learning rate of 5 × 10⁻⁴ for 500 epochs.
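The objective implied by this setup can be sketched as follows, assuming the standard DER loss of Amini et al. (2020): a Student-t negative log-likelihood plus an evidence regularizer weighted by the quoted λ = 0.01 (the example arguments are illustrative):

```python
import math

def der_loss(y, gamma, nu, alpha, beta, lam=0.01):
    """Deep Evidential Regression training loss (Amini et al. 2020):
    Student-t negative log-likelihood of target y under the predicted
    NIG parameters, plus an evidence regularizer weighted by lam.
    The paper's setup minimizes this with Adam (initial learning
    rate 5e-4) for 500 epochs, using lam = 0.01.
    """
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * math.log(math.pi / nu)
           - alpha * math.log(omega)
           + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
           + math.lgamma(alpha) - math.lgamma(alpha + 0.5))
    # Regularizer: penalize confident evidence on large residuals.
    reg = abs(y - gamma) * (2.0 * nu + alpha)
    return nll + lam * reg

# Illustrative values: the loss grows with the residual |y - gamma|.
small_residual = der_loss(y=0.0, gamma=0.0, nu=1.0, alpha=2.0, beta=1.0)
large_residual = der_loss(y=1.0, gamma=0.0, nu=1.0, alpha=2.0, beta=1.0)
```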