Uncertainty Regularized Evidential Regression
Authors: Kai Ye, Tiejin Chen, Hua Wei, Liang Zhan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments substantiate our theoretical findings and demonstrate the effectiveness of the proposed solution. |
| Researcher Affiliation | Academia | 1 University of Pittsburgh, Pittsburgh, PA, 15260, USA; 2 Arizona State University, Tempe, AZ, 85281, USA |
| Pseudocode | No | The paper describes its methods mathematically and textually but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is at https://github.com/FlynnYe/UR-ERN |
| Open Datasets | Yes | "Following (Amini et al. 2020), we train models on y = x³ + ϵ, where ϵ ∼ N(0, 3). We conduct training over the interval x ∈ [−4, 4], and perform testing over x ∈ [−6, −4) ∪ (4, 6]." and "We choose the NYU Depth v2 dataset (Silberman et al. 2012) for experiments." (see the data-generation sketch below the table) |
| Dataset Splits | No | The paper mentions training and testing intervals for the cubic regression dataset, but it does not explicitly provide training/validation/test dataset splits (e.g., percentages, sample counts, or cross-validation details) for reproducibility, nor does it specify how validation data was used. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9') that would be necessary to replicate the experiment environment. |
| Experiment Setup | Yes | "For experiments within HUA, we initialize the model within HUA by setting bias in the activation layer." and "Please refer to Appendix B for details about experimental setups and experiments about the sensitivity of hyperparameters." |
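
The synthetic cubic regression dataset quoted in the Open Datasets row is straightforward to regenerate. Below is a minimal NumPy sketch of that setup; the sample counts, the random seed, and the reading of N(0, 3) as a noise standard deviation of 3 are assumptions on our part, since the quoted passage does not state them.

```python
import numpy as np

def make_cubic_data(n_train=1000, n_test=1000, noise_std=3.0, seed=0):
    """Synthetic data following the quoted setup: y = x^3 + eps, eps ~ N(0, 3).

    Sample counts, seed, and treating 3 as the noise standard deviation are
    illustrative assumptions, not values reported in the quoted passage.
    """
    rng = np.random.default_rng(seed)

    # Training inputs from the in-distribution interval [-4, 4].
    x_train = rng.uniform(-4.0, 4.0, size=n_train)

    # Test inputs from the out-of-distribution intervals [-6, -4) and (4, 6].
    # rng.uniform samples from [low, high), which matches [-6, -4) exactly and
    # approximates (4, 6] up to the measure-zero endpoints.
    half = n_test // 2
    x_test = np.concatenate([
        rng.uniform(-6.0, -4.0, size=half),
        rng.uniform(4.0, 6.0, size=n_test - half),
    ])

    # Targets: cubic function plus Gaussian noise.
    y_train = x_train ** 3 + rng.normal(0.0, noise_std, size=n_train)
    y_test = x_test ** 3 + rng.normal(0.0, noise_std, size=x_test.shape[0])
    return (x_train, y_train), (x_test, y_test)
```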