Stable Conformal Prediction Sets

Authors: Eugene Ndiaye

ICML 2022

Reproducibility assessment — each entry gives the variable, the assessed result, and the supporting LLM response:
Research Type: Experimental. "We provide some numerical experiments to illustrate the tightness of our estimation when the sample size is sufficiently large, on both synthetic and real datasets."
Researcher Affiliation: Academia. "H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA. Correspondence to: Eugene Ndiaye <endiaye3@gatech.edu>."
Pseudocode: Yes. "Algorithm 1 Stable Conformal Prediction Set" (a hedged sketch of the underlying idea is given after this table).
Open Source Code: Yes. "A python package with our implementation is available at https://github.com/EugeneNdiaye/stable_conformal_prediction where additional numerical experiments (e.g., using large pre-trained neural net) and benchmarks will be provided."
Open Datasets: Yes. "Benchmarking conformal sets for the least absolute deviation regression models with a ridge regularization on real datasets. (a) Boston (506, 13) (b) Diabetes (442, 10) (c) California Housing (20640, 8) (d) Friedman1 (500, 100)"
Dataset Splits: No. The paper describes the split CP baseline, which involves data splitting: "Let us define the training set Dtr = {(x1, y1), …, (xm, ym)} with m < n, and the calibration set Dcal = {(xm+1, ym+1), …, (xn, yn)}." However, it does not give specific split percentages or a detailed splitting methodology for its own experimental setup, beyond mentioning "100 random permutations of the data" for evaluation. (A split CP sketch also follows the table.)
Hardware Specification: No. The paper does not specify the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies: No. The paper mentions "A python package" for the implementation and refers to the sklearn synthetic dataset make_regression in Figure 2, but it does not give version numbers for Python or any of the libraries used, such as scikit-learn or PyTorch.
Experiment Setup: Yes. "We conduct all the experiments with a coverage level of 0.9, i.e., α = 0.1. We fixed ẑ = 0 and λ = 0.5. We used a default value of ϵr = 10⁻⁴. The parameter of the model is obtained after T = n/10 iterations of stochastic gradient descent. For stab CP, we use a stability bound estimate τi = T‖xi‖/(n + 1)."
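
For concreteness, here is a minimal sketch of the split CP baseline quoted in the Dataset Splits row: fit on Dtr, calibrate absolute residuals on Dcal, and return a symmetric interval around the point prediction. The paper benchmarks least absolute deviation regression with ridge regularization; Ridge below is only an illustrative stand-in, and train_frac is a hypothetical choice since the paper does not report its own split percentages.

```python
import numpy as np
from sklearn.linear_model import Ridge

def split_cp_interval(X, y, x_new, alpha=0.1, train_frac=0.5, seed=0):
    """Split CP sketch: fit on Dtr, calibrate absolute residuals on Dcal."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    m = int(train_frac * len(y))              # m < n: size of Dtr
    tr, cal = idx[:m], idx[m:]
    # Illustrative stand-in for the paper's LAD + ridge model;
    # sklearn's `alpha` is the regularization strength (lambda = 0.5 above).
    model = Ridge(alpha=0.5).fit(X[tr], y[tr])
    residuals = np.abs(y[cal] - model.predict(X[cal]))
    # Conformal quantile: the ceil((1 - alpha) * (|Dcal| + 1))-th smallest residual.
    k = int(np.ceil((1 - alpha) * (len(cal) + 1)))
    q = np.inf if k > len(cal) else np.sort(residuals)[k - 1]
    pred = model.predict(np.atleast_2d(x_new))[0]
    return pred - q, pred + q
```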
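And a minimal sketch of the idea behind Algorithm 1, as I read it from the paper's description rather than a transcription of the algorithm itself: fit the model once on the data augmented with an anchor candidate ẑ, then use per-sample stability bounds τ to accept or reject candidates z without refitting. The name stable_cp_set and the callable fit_predict are hypothetical; the conservative rank test below yields an outer approximation of the full conformal set.

```python
import numpy as np

def stable_cp_set(fit_predict, X, y, x_new, z_hat, tau, z_grid, alpha=0.1):
    """Conservative (outer) approximation of the full conformal set from a
    single fit at the anchor candidate z_hat.  tau[i] bounds how much the
    i-th conformity score can move when z_hat is replaced by any candidate
    z; the last entry tau[-1] bounds the test point's score."""
    n = len(y)
    X_aug = np.vstack([X, np.atleast_2d(x_new)])
    y_aug = np.append(y, z_hat)
    preds = fit_predict(X_aug, y_aug)          # fit once; never refit per candidate
    scores = np.abs(y_aug[:n] - preds[:n])     # conformity scores at z_hat
    inflated = scores + tau[:n]                # worst-case calibration scores
    kept = []
    for z in z_grid:
        e_test = abs(z - preds[-1]) - tau[-1]  # best-case (deflated) test score
        pi = (1.0 + np.sum(inflated >= e_test)) / (n + 1)  # conservative typicalness
        if pi > alpha:
            kept.append(z)
    return kept                                # grid points kept in the outer set
```

With the values from the Experiment Setup row, one would take z_hat = 0, alpha = 0.1, T = n // 10, and tau proportional to the per-row norms, e.g. tau = T * np.linalg.norm(np.vstack([X, np.atleast_2d(x_new)]), axis=1) / (n + 1); fit_predict is any routine that refits the model on the augmented data and returns its predictions on it.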