Adaptive Conformal Inference by Betting

Authors: Aleksandr Podkopaev, Dong Xu, Kuang-Chih Lee

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive simulations with focus on adaptability to distribution shifts, we demonstrate the compelling empirical performance of the proposed methods.
Researcher Affiliation | Industry | Walmart Global Tech. Correspondence to: Aleksandr Podkopaev <sasha.podkopaev@walmart.com>.
Pseudocode | Yes | Algorithm 1 KT-based Adaptive Conformal Predictor. (See the betting-loop sketch below.)
Open Source Code | Yes | By providing open access to the code as a supplement for the purposes of transparency and reproducibility, our work aims to reach better understanding within the research community.
Open Datasets | Yes | Following Barber et al. (2023), we consider a changepoint setting where the data {(X_t, Y_t)}_{t=1}^n are generated according to a linear model: Y_t = X_t^⊤ β_t + ε_t, with X_t ~ N(0, I_4), ε_t ~ N(0, 1), t ≥ 1. Next, we consider the dataset for forecasting the electricity demand in New South Wales (Harries, 1999). (See the data-generation sketch below.)
Dataset Splits | No | The paper describes online learning scenarios and dynamic retraining. It mentions that "the first 25 weeks of data are used to train the initial model, followed by retraining at the end of each subsequent week" for stock prices, which is a form of temporal split for training, but it does not specify explicit fixed training/validation/test dataset splits for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments.
Software Dependencies | No | The paper mentions using 'Prophet' as a prediction model but does not specify any software names with version numbers or other programming language/library details required for reproducibility.
Experiment Setup | Yes | Throughout all experiments, we fix the target coverage level at 90% (α = 0.1). For prediction, we first use a standard linear regression model whose coefficients are learned by optimizing the least squares objective on observed data prior to a given time step. For each stock, the first 25 weeks of data are used to train the initial model, followed by retraining at the end of each subsequent week. (See the online-loop and retraining-schedule sketches below.)
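
The changepoint simulation quoted in the Open Datasets row is straightforward to reproduce. Below is a minimal sketch (not the authors' released code) that generates data from Y_t = X_t^⊤ β_t + ε_t with X_t ~ N(0, I_4) and ε_t ~ N(0, 1); the changepoint locations and coefficient values here are placeholder assumptions, since the paper follows the schedule of Barber et al. (2023).

```python
import numpy as np

def generate_changepoint_data(n=2000, d=4, changepoints=(500, 1500), seed=0):
    """Simulate {(X_t, Y_t)} from a linear model whose coefficients shift at changepoints."""
    rng = np.random.default_rng(seed)
    # Placeholder coefficient regimes; the paper follows Barber et al. (2023) for the actual schedule.
    betas = [rng.standard_normal(d) for _ in range(len(changepoints) + 1)]
    X = rng.standard_normal((n, d))   # X_t ~ N(0, I_d)
    eps = rng.standard_normal(n)      # eps_t ~ N(0, 1)
    Y = np.empty(n)
    for t in range(n):
        regime = sum(t >= cp for cp in changepoints)   # index of the active coefficient vector
        Y[t] = X[t] @ betas[regime] + eps[t]
    return X, Y
```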
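
The Pseudocode and Experiment Setup rows together describe the core loop: refit a least-squares model on all data observed so far, form a prediction interval at the 90% target level (α = 0.1), and adapt the interval radius online. The sketch below is a hedged illustration of that loop in which the radius is driven by a classical Krichevsky-Trofimov (KT) coin-betting update; the exact betting scheme and its mapping to the radius in the paper's Algorithm 1 may differ, and the function and variable names here are ours.

```python
import numpy as np

def online_adaptive_conformal(X, Y, alpha=0.1, warmup=50, q0=1.0):
    """Online loop: refit OLS on past data, predict an interval, adapt its radius by coin betting."""
    n = len(Y)
    wealth, coin_sum = 1.0, 0.0          # state of the KT bettor
    covered = []
    for t in range(warmup, n):
        step = t - warmup + 1            # 1-based betting round
        bet_fraction = coin_sum / step   # KT fraction: average of past coin outcomes
        q = max(q0 + bet_fraction * wealth, 0.0)   # illustrative mapping from bettor output to radius
        # Least-squares fit on all observations strictly before time t.
        beta_hat, *_ = np.linalg.lstsq(X[:t], Y[:t], rcond=None)
        pred = X[t] @ beta_hat
        covered.append(pred - q <= Y[t] <= pred + q)
        score = abs(Y[t] - pred)                     # nonconformity score: absolute residual
        coin = (1.0 if score > q else 0.0) - alpha   # coin outcome in [-alpha, 1 - alpha]
        wealth *= 1.0 + coin * bet_fraction          # KT wealth update
        coin_sum += coin
    return float(np.mean(covered))
```

Under this sketch, the returned value is the empirical coverage over the online rounds, which can be checked against the 90% target; for the precise KT-based update, consult Algorithm 1 and the released code.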
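
Finally, the temporal scheme noted in the Dataset Splits and Experiment Setup rows (train on the first 25 weeks, then refit at the end of each subsequent week) can be expressed as a rolling schedule. The helper below is a hypothetical sketch; the five-observations-per-week granularity is an assumption for illustration.

```python
def weekly_refit_schedule(n_obs, obs_per_week=5, initial_weeks=25):
    """Yield (train_end, eval_end) pairs: fit on [0, train_end), evaluate on [train_end, eval_end)."""
    train_end = initial_weeks * obs_per_week
    while train_end < n_obs:
        eval_end = min(train_end + obs_per_week, n_obs)
        yield train_end, eval_end
        train_end = eval_end
```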