Deep Evidential Regression
Authors: Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate learning well-calibrated measures of uncertainty on various benchmarks, scaling to complex computer vision tasks, as well as robustness to adversarial and OOD test samples. We first qualitatively compare the performance of our approach against a set of baselines on a one-dimensional cubic regression dataset (Fig. 3). Additionally, we compare our approach to baseline methods for NN predictive uncertainty estimation on real-world datasets used in [20, 28, 9]. |
| Researcher Affiliation | Academia | Alexander Amini¹, Wilko Schwarting¹, Ava Soleimany², Daniela Rus¹; ¹Computer Science and Artificial Intelligence Lab (CSAIL), Massachusetts Institute of Technology (MIT); ²Harvard Graduate Program in Biophysics |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions 'open-source code' in its keywords, but does not provide a specific repository link or an explicit statement about the availability of the code for the described methodology. |
| Open Datasets | Yes | Our training data consists of over 27k RGB-to-depth, H × W, image pairs of indoor scenes (e.g. kitchen, bedroom, etc.) from the NYU Depth v2 dataset [35]. |
| Dataset Splits | No | The paper states, 'Training details for Table 1 are available in Sec. S2.2.' and 'Full dataset, model, training, and performance details for depth models are available in Sec. S3.', but does not provide specific training/validation/test dataset splits or their sizes in the main text. |
| Hardware Specification | Yes | We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Volta V100 GPU used for this research. |
| Software Dependencies | No | The paper does not provide specific version numbers for its software dependencies or libraries. |
| Experiment Setup | Yes | The total loss, L_i(w), consists of the two loss terms for maximizing and regularizing evidence, scaled by a regularization coefficient, λ... We enforce the constraints on (ν, α, β) with a softplus activation (and an additional +1 added to α since α > 1). Linear activation is used for γ ∈ R. |
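
The experiment-setup row above is concrete enough to sketch in code. Since the paper does not link an implementation, the following is a minimal, hypothetical PyTorch rendering: the final layer emits the four Normal-Inverse-Gamma parameters (γ, ν, α, β), with softplus enforcing ν, α, β > 0 (plus 1 added to α so that α > 1) and a linear activation for γ, and the total loss combines the evidential negative log-likelihood (Eq. 8 of the paper) with the evidence regularizer |y − γ|(2ν + α) (Eq. 9), scaled by λ. The layer width, the λ default, and all names here are placeholders, not the authors' code.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Output layer for the Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta).

    Activations follow the paper's description: softplus for (nu, alpha, beta),
    an extra +1 on alpha so that alpha > 1, and a linear activation for gamma.
    """

    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 4)

    def forward(self, x):
        gamma, nu, alpha, beta = self.linear(x).chunk(4, dim=-1)
        return (
            gamma,                    # gamma in R: linear activation
            F.softplus(nu),           # nu > 0
            F.softplus(alpha) + 1.0,  # alpha > 1
            F.softplus(beta),         # beta > 0
        )


def evidential_loss(y, gamma, nu, alpha, beta, lam=0.01):
    """Total loss L_i(w) = L_i^NLL(w) + lam * L_i^R(w).

    L_i^NLL is the evidential NLL (Eq. 8) and L_i^R = |y - gamma| * (2*nu + alpha)
    is the evidence regularizer (Eq. 9). The default lam = 0.01 is an assumption;
    the paper treats lambda as a tunable coefficient.
    """
    omega = 2.0 * beta * (1.0 + nu)
    nll = (
        0.5 * torch.log(math.pi / nu)
        - alpha * torch.log(omega)
        + (alpha + 0.5) * torch.log((y - gamma) ** 2 * nu + omega)
        + torch.lgamma(alpha)
        - torch.lgamma(alpha + 0.5)
    )
    reg = torch.abs(y - gamma) * (2.0 * nu + alpha)
    return (nll + lam * reg).mean()
```

A short usage sketch on a 1-D cubic toy problem in the spirit of the paper's Fig. 3 benchmark (the noise scale and training range here are assumptions):

```python
x = torch.linspace(-4.0, 4.0, 1024).unsqueeze(-1)
y = x ** 3 + 3.0 * torch.randn_like(x)   # assumed noise level

head = EvidentialHead(in_features=1)      # stand-in for a full network
loss = evidential_loss(y, *head(x))
loss.backward()
```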