Sequential Predictive Conformal Inference for Time Series
Authors: Chen Xu, Yao Xie
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using simulation and real-data experiments, we demonstrate a significant reduction in interval width of SPCI compared to other existing methods under the desired empirical coverage. Experimentally, we demonstrate competitive and/or improved empirical performance against baseline CP methods on sequential data. |
| Researcher Affiliation | Academia | H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA. Correspondence to: Yao Xie <yao.xie@isye.gatech.edu>. |
| Pseudocode | Yes | Algorithm 1 Sequential Predictive Conformal Inference (SPCI); Algorithm 2 SPCI for exchangeable data (based on split conformal); Algorithm 3 Multi-step SPCI (based on LOO prediction in EnbPI (Xu & Xie, 2021b)). A hedged sketch of the Algorithm 1 idea follows the table. |
| Open Source Code | Yes | Official implementation can be found at https://github.com/hamrel-cxu/SPCI-code. |
| Open Datasets | Yes | The first dataset is the wind speed data (m/s) at wind farms operated by the Midcontinent Independent System Operator (MISO) in the US (Zhu et al., 2021). The second dataset contains solar radiation information in Atlanta downtown, measured in Diffuse Horizontal Irradiance (DHI); it is collected from the National Solar Radiation Database (NSRDB), https://nsrdb.nrel.gov/, contains a yearly record in 2018, and is updated every 30 minutes. The last dataset tracks electricity usage and pricing (Harries et al., 1999) in the states of New South Wales and Victoria in Australia, with an update frequency of 30 minutes over a 2.5-year period in 1996–1999. The stock market data is publicly available on Kaggle: https://www.kaggle.com/datasets/paultimothymooney/stock-market-data |
| Dataset Splits | Yes | We fix α = 0.1 and use the first 80% (resp. rest 20%) data for training (resp. testing). |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | In our experiments, we use the Python implementation of QRF by (Roebroek, 2022). Roebroek, J. Sklearn-quantile, 2022. URL: https://github.com/jasperroebroek/sklearn-quantile. |
| Experiment Setup | Yes | We fix α = 0.1 and use the first 80% (resp. rest 20%) data for training (resp. testing). For SPCI and EnbPI, we use the random forest regression model with 25 bootstrap models. A minimal end-to-end sketch of this configuration follows the table. |
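
The Pseudocode row points to Algorithm 1 (SPCI). As a rough illustration of the idea rather than the authors' exact procedure, the Python sketch below forms a prediction interval at each test time from quantiles of recent out-of-sample residuals around a point forecast. The helper name `spci_interval`, the window length `w`, and the use of plain empirical quantiles in place of the paper's quantile random forest are simplifying assumptions.

```python
import numpy as np

def spci_interval(point_pred, past_residuals, alpha=0.1, w=100):
    """Sketch of an SPCI-style interval at one test time (not the official implementation).

    point_pred     : scalar point forecast f_hat(x_t)
    past_residuals : 1-D sequence of past residuals y_s - f_hat(x_s)
    alpha          : target miscoverage level
    w              : rolling window of residuals used for quantile estimation
                     (the paper fits a quantile random forest on past residuals;
                      empirical quantiles are used here for brevity)
    """
    recent = np.asarray(past_residuals)[-w:]
    # Search over beta in [0, alpha] for the narrowest interval holding
    # 1 - alpha residual mass, mirroring the width-minimization step in Algorithm 1.
    betas = np.linspace(0.0, alpha, 21)
    lo_q = np.quantile(recent, betas)
    hi_q = np.quantile(recent, 1.0 - alpha + betas)
    best = np.argmin(hi_q - lo_q)
    return point_pred + lo_q[best], point_pred + hi_q[best]
```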
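For the Experiment Setup row, a minimal end-to-end sketch of the reported configuration (α = 0.1, first 80% of the data for training, random forest with 25 bootstrap models) might look as follows, reusing the `spci_interval` helper above. The data loading, feature construction, and the use of scikit-learn's `RandomForestRegressor` in place of the EnbPI leave-one-out ensemble are assumptions made for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_spci_sketch(X, y, alpha=0.1, train_frac=0.8, n_estimators=25, w=100):
    """Rolling evaluation loop in the spirit of the paper's setup (not the official code)."""
    n_train = int(train_frac * len(y))
    X_train, y_train = X[:n_train], y[:n_train]
    X_test, y_test = X[n_train:], y[n_train:]

    # Random forest point predictor with 25 trees, standing in for the
    # bootstrap ensemble used by SPCI / EnbPI.
    model = RandomForestRegressor(n_estimators=n_estimators, random_state=0)
    model.fit(X_train, y_train)

    residuals = list(y_train - model.predict(X_train))
    covered, widths = [], []
    for x_t, y_t in zip(X_test, y_test):
        pred = model.predict(x_t.reshape(1, -1))[0]
        lo, hi = spci_interval(pred, residuals, alpha=alpha, w=w)
        covered.append(lo <= y_t <= hi)
        widths.append(hi - lo)
        residuals.append(y_t - pred)  # feed the new residual back in sequentially
    return np.mean(covered), np.mean(widths)
```

Reported metrics in the paper are empirical coverage and average interval width; the two return values above correspond to those quantities for this simplified pipeline.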