Understanding the Under-Coverage Bias in Uncertainty Estimation
Authors: Yu Bai, Song Mei, Huan Wang, Caiming Xiong
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on simulated and real data verify our theory and further illustrate the effect of various factors such as sample size and model capacity on the under-coverage bias in more practical setups. |
| Researcher Affiliation | Collaboration | Yu Bai (Salesforce Research, yu.bai@salesforce.com); Song Mei (UC Berkeley, songmei@berkeley.edu); Huan Wang (Salesforce Research, huan.wang@salesforce.com); Caiming Xiong (Salesforce Research, cxiong@salesforce.com) |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide links to a code repository for the methodology described. |
| Open Datasets | Yes | We take six real-world regression datasets: community and crimes (Community) [2], bike sharing (Bike) [1], Tennessee's student-teacher achievement ratio (STAR) [6], as well as the medical expenditure survey number 19 (MEPS_19) [3], number 20 (MEPS_20) [4], and number 21 (MEPS_21) [5]. |
| Dataset Splits | Yes | For each setting, we average over 8 random seeds where each seed determines the train-validation split, model initialization, and SGD batching. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU, CPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using momentum SGD, but does not provide specific software names with version numbers (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x) or specific solver versions. |
| Experiment Setup | Yes | We minimize the α-quantile loss (3) via momentum SGD with batch size 64. (A hedged code sketch of this recipe follows the table.) |
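
The quoted rows pin down only a few pieces of the recipe: the α-quantile (pinball) loss ρ_α(u) = max(αu, (α−1)u) with u = y − q, momentum SGD with batch size 64, and averaging over 8 random seeds where each seed fixes the train-validation split, model initialization, and SGD batching. The sketch below assembles those pieces in PyTorch. The framework choice, network architecture, 80/20 split ratio, learning rate, momentum value, and epoch count are all illustrative assumptions, not details from the paper.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def pinball_loss(pred, target, alpha):
    # alpha-quantile (pinball) loss: rho_alpha(u) = max(alpha*u, (alpha-1)*u), u = y - q.
    diff = target - pred
    return torch.mean(torch.maximum(alpha * diff, (alpha - 1) * diff))

def run_seed(X, y, seed, alpha=0.9, epochs=100, lr=1e-3):
    # One seed determines the train-validation split, model init, and SGD batching.
    torch.manual_seed(seed)
    perm = torch.randperm(len(X))
    n_train = int(0.8 * len(X))                   # 80/20 split ratio: an assumption
    train_idx, val_idx = perm[:n_train], perm[n_train:]

    # Small fully connected network; the architecture is illustrative only.
    model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)  # momentum SGD
    loader = DataLoader(TensorDataset(X[train_idx], y[train_idx]),
                        batch_size=64, shuffle=True)                # batch size 64

    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = pinball_loss(model(xb).squeeze(-1), yb, alpha)
            loss.backward()
            opt.step()

    # Empirical coverage on the held-out split; the paper's thesis is that this
    # systematically falls below the nominal level alpha (under-coverage).
    with torch.no_grad():
        return (y[val_idx] <= model(X[val_idx]).squeeze(-1)).float().mean().item()

# Average over 8 random seeds, as in the quoted protocol:
# coverages = [run_seed(X, y, seed=s) for s in range(8)]
```

Averaging the per-seed coverages and comparing the result to α reproduces the kind of under-coverage measurement the paper reports across sample sizes and model capacities.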