Conformal Prediction with Temporal Quantile Adjustments
Authors: Zhen Lin, Shubhendu Trivedi, Jimeng Sun
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate TQA's performance through extensive experimentation: TQA generally obtains efficient PIs and improves longitudinal coverage while preserving cross-sectional coverage. |
| Researcher Affiliation | Academia | Zhen Lin (1), Shubhendu Trivedi (2), Jimeng Sun (1,3); (1) Department of Computer Science, University of Illinois at Urbana-Champaign; (2) Massachusetts Institute of Technology; (3) Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign |
| Pseudocode | No | The paper describes algorithmic steps and equations (e.g., Eq. 9) but does not present them in a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is available at https://github.com/zlin7/TQA. |
| Open Datasets | Yes | Datasets We test our methods and baselines on the following datasets: Electronic health records data for white blood cell counts (WBCC) prediction (MIMIC [23, 18, 22]), COVID-19 cases prediction (COVID [10]), Electroencephalography trajectory prediction after visual stimuli (EEG [51]), energy load forecasting (GEFCom [20]), and healthcare claim amount prediction (CLAIM) using data from a large American healthcare data provider. |
| Dataset Splits | Yes | To construct a PI for Y_{i,t}, we first split our data {S_i}_{i=1}^N into a proper training set and a calibration set [41]. Table 1: Number of TSs in each dataset along with the length; # train/cal/test: 192/100/100, 2393/500/500, 200/100/80, 300/100/200, 1198/200/700. |
| Hardware Specification | No | The provided paper text does not contain specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'RNN as the base point estimator' and 'LSTM ([19])' and 'Adam [24]', but does not specify specific software library names with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | Yes | We use RNN as the base point estimator due to its flexibility and for comparison with [48]. We use α = 0.1, and an LSTM ([19]) similar to that in [48] (full implementation details in the Appendix). For TQA-E, we use γ = 0.005 following [17]. |
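The "Dataset Splits" and "Experiment Setup" rows above describe the standard split conformal recipe the paper builds on: fit a point estimator on a proper training set, compute nonconformity scores on a calibration set, and use their quantile to form prediction intervals at miscoverage α = 0.1. A minimal sketch of that baseline recipe (not the paper's TQA quantile adjustment; the polynomial point estimator and the synthetic data are stand-ins for the paper's RNN/LSTM and datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D regression data (stand-in for the paper's time series).
x = rng.uniform(-3, 3, size=400)
y = np.sin(x) + rng.normal(0, 0.2, size=400)

# Split into a proper training set and a calibration set.
x_train, y_train = x[:300], y[:300]
x_cal, y_cal = x[300:], y[300:]

# Toy point estimator: cubic polynomial fit (stand-in for the RNN/LSTM).
coefs = np.polyfit(x_train, y_train, deg=3)
def predict(z):
    return np.polyval(coefs, z)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Conformal quantile at miscoverage alpha = 0.1, with the usual
# finite-sample (n + 1) correction.
alpha = 0.1
n = len(scores)
level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new point: [f(x) - q, f(x) + q].
x_new = 1.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

TQA's contribution is to adjust this calibration quantile over time within each series to improve longitudinal coverage; the sketch above only shows the cross-sectional split conformal step it starts from.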