Training-Free Uncertainty Estimation for Dense Regression: Sensitivity as a Surrogate

Authors: Lu Mi, Hao Wang, Yonglong Tian, Hao He, Nir N. Shavit

AAAI 2022, pp. 10042-10050

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate our three proposed approaches on two representative real-world large-scale dense regression tasks, single image super-resolution and monocular depth estimation. ... We compare our methods with three training-required baselines. ... The evaluation results using the metrics described above are shown in Table 1 and Table 2.
Researcher Affiliation | Academia | Lu Mi¹, Hao Wang², Yonglong Tian¹, Hao He¹, Nir N. Shavit¹ (¹MIT CSAIL, ²Rutgers University)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/lumi9587/train-free-uncertainty.
Open Datasets | Yes | We evaluate the model on NYU Depth Dataset V2. ... (Deng et al. 2009) ... (Ledig et al. 2017)
Dataset Splits | No | The paper uses standard datasets and pre-trained models, but it does not provide specific train/validation/test split percentages, sample counts, or an explicit splitting methodology for reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or other machine specifications) used for running its experiments.
Software Dependencies | No | The paper mentions TensorLayer and specific models but does not provide ancillary software details such as library names with version numbers (e.g., Python 3.x, PyTorch x.x, CUDA x.x).
Experiment Setup | Yes | For infer-noise, noise is added to the feature maps of a chosen layer; this noise is randomly sampled multiple times during inference to form a set of diverse outputs. For infer-dropout, random dropout is applied over multiple forward passes to generate output samples, whose variance is then used as the uncertainty estimate. ... We choose 4 different locations for noise injection, including the layer right after the input and some intermediate layers (see details in the Supplement). For each experiment, we only add the noise into one layer with a specific σ value. Sample numbers of 8 and 32 are evaluated. ... For infer-dropout, we take the trained model and add a dropout layer with varied dropout rates. We choose the dropout rate ω from the set {0.01, 0.02, 0.05, 0.1, 0.2, 0.5} and use the same set of locations as infer-noise. For each experiment, we only add the layer at one location with one specific dropout rate. Sample numbers of 8 and 32 are evaluated.
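
To make these two inference-time procedures concrete, below is a minimal PyTorch sketch of infer-noise and infer-dropout. This is an illustration under our own assumptions, not the authors' implementation (their released code, linked above, may use a different framework): the function names, the default σ, and the use of a forward hook to choose the injection location are hypothetical.

```python
# Minimal sketch (assumptions, not the paper's released code): perturb one
# layer of a trained model at inference time, run multiple stochastic forward
# passes, and use the per-pixel sample variance as the uncertainty map.
import torch
import torch.nn.functional as F

def _sample_with_hook(model, x, layer, hook, n_samples):
    """Attach a forward hook to `layer`, draw n_samples stochastic outputs,
    then remove the hook. Returns a tensor of shape (n_samples, *output)."""
    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    handle.remove()
    return samples

def infer_noise(model, x, layer, sigma=0.1, n_samples=32):
    """Infer-noise: add Gaussian noise (std sigma, a hypothetical default)
    to the chosen layer's feature map during inference."""
    def add_noise(module, inputs, output):
        # Returning a value from a forward hook replaces the layer's output.
        return output + sigma * torch.randn_like(output)
    samples = _sample_with_hook(model, x, layer, add_noise, n_samples)
    return samples.mean(0), samples.var(0)  # prediction, uncertainty map

def infer_dropout(model, x, layer, rate=0.1, n_samples=32):
    """Infer-dropout: apply dropout (rates {0.01, ..., 0.5} in the paper)
    to the chosen layer's output, kept active at inference time."""
    def apply_dropout(module, inputs, output):
        return F.dropout(output, p=rate, training=True)
    samples = _sample_with_hook(model, x, layer, apply_dropout, n_samples)
    return samples.mean(0), samples.var(0)
```

Attaching a temporary forward hook keeps the trained network untouched, which matches the training-free premise: the same pre-trained model is reused as-is, and only the inference pass is made stochastic at one chosen location.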