Robust Conformal Prediction Using Privileged Information

Authors: Shai Feldman, Yaniv Romano

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical experiments on both real and synthetic datasets indicate that our approach achieves a valid coverage rate and constructs more informative predictions compared to existing methods, which are not supported by theoretical guarantees.
Researcher Affiliation | Academia | Shai Feldman, Department of Computer Science, Technion, Israel (shai.feldman@cs.technion.ac.il); Yaniv Romano, Departments of Electrical and Computer Engineering and of Computer Science, Technion, Israel (yromano@cs.technion.ac.il)
Pseudocode | Yes | Algorithm 1: Privileged Conformal Prediction (PCP) ... Algorithm 2: Two-Staged Conformal Prediction (Two-Staged) ... Algorithm 3: Efficient Privileged Conformal Prediction (PCP) ... Algorithm 4: Privileged Conformal Prediction for scarce data (LOO-PCP). (A minimal split-conformal calibration sketch is given after the table.)
Open Source Code | Yes | Software implementing the proposed method and reproducing our experiments is available at https://github.com/Shai128/pcp.
Open Datasets | Yes | We test the applicability of our method on the semi-synthetic Infant Health and Development Program (IHDP) dataset [35]... We study the performance of PCP and compare it to baselines in a missing response setting using six real datasets: Facebook1, Facebook2 [36], Bio [37], House [38], Meps19 [39], and Blog [40]... CIFAR-10N [41]... CIFAR-10 [42]... The Twins dataset [47]... The National Study of Learning Mindsets (NSLM) dataset [49]... CIFAR-10C [50].
Dataset Splits | Yes | In all experiments, we randomly split the data into training, validation, calibration, and test sets. We fit a base learning model on the training data and use the validation set to avoid overfitting. ... we split the data into a training set (50%), a calibration set (20%), a validation set (10%) used for early stopping, and a test set (20%) to evaluate performance. See Section D.2 for the specific details in the scarce-data experiments. ... In this experiment, we split the data into a training set (30%), a validation set (10%), and a test set (60%). (A minimal data-splitting sketch is given after the table.)
Hardware Specification | Yes | The resources used for the experiments are: CPU: Intel(R) Xeon(R) E5-2650 v4. GPU: Nvidia Titan X, 1080 Ti, 2080 Ti. OS: Ubuntu 18.04.
Software Dependencies | No | The paper mentions using the 'xgboost package [53]', 'scikit-learn package [54]', and 'pytorch package [55]' for implementation, but does not provide version numbers for these software components.
Experiment Setup | Yes | In regression tasks, the model is trained to learn the 5% and 95% conditional quantiles of Y | X. In Table 2 we summarize the model we used for each dataset for both tasks. For neural network models, we used an Adam optimizer [52] with a 1e-4 learning rate and a batch size of 128. The network is composed of hidden layers of sizes 32, 64, 64, and 32, with 0.1 dropout and leaky ReLU as the activation function. For xgboost and random forest models, we used 100 estimators. We train the networks for 1000 epochs, but stop the training earlier if the validation loss does not improve for 200 epochs; in this case, the model with the lowest validation loss is chosen. (A minimal model/training sketch is given after the table.)
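
The full pseudocode for PCP and its variants is given in the paper. For orientation only, the following is a minimal sketch of the split-conformal (CQR-style) calibration step that such procedures build on. The function names and the optional `weights` argument, which stands in for where a privileged-information-based reweighting could enter, are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def cqr_scores(y_calib, q_lo_calib, q_hi_calib):
    # Conformity score of conformalized quantile regression (CQR):
    # how far each calibration label falls outside its estimated quantile band.
    return np.maximum(q_lo_calib - y_calib, y_calib - q_hi_calib)

def weighted_quantile(scores, level, weights=None):
    # Empirical quantile of the calibration scores. `weights` is a hypothetical
    # hook for a privileged-information-based reweighting; uniform weights
    # recover standard split-conformal calibration.
    if weights is None:
        weights = np.ones_like(scores, dtype=float)
    order = np.argsort(scores)
    scores, weights = scores[order], weights[order]
    cdf = np.cumsum(weights) / np.sum(weights)
    idx = np.searchsorted(cdf, level)
    return scores[min(idx, len(scores) - 1)]

def conformal_interval(q_lo_test, q_hi_test, scores, alpha=0.1, weights=None):
    # Split-conformal prediction interval at miscoverage level alpha.
    n = len(scores)
    level = min(1.0, (1 - alpha) * (n + 1) / n)
    q_hat = weighted_quantile(scores, level, weights)
    return q_lo_test - q_hat, q_hi_test + q_hat
```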
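The split proportions quoted in the Dataset Splits row (50% train, 20% calibration, 10% validation, 20% test) can be realized with a straightforward random partition. This is a minimal sketch under that assumption; the function name and seed handling are chosen for illustration.

```python
import numpy as np

def split_indices(n, frac_train=0.5, frac_calib=0.2, frac_val=0.1, seed=0):
    # Random train/calibration/validation/test partition with the reported
    # proportions (50% / 20% / 10%); the remaining indices form the test set.
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    n_train, n_calib, n_val = int(frac_train * n), int(frac_calib * n), int(frac_val * n)
    train = perm[:n_train]
    calib = perm[n_train:n_train + n_calib]
    val = perm[n_train + n_calib:n_train + n_calib + n_val]
    test = perm[n_train + n_calib + n_val:]
    return train, calib, val, test
```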
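As a concrete reading of the reported experiment setup, a minimal PyTorch sketch of the quantile-regression network and optimizer follows. The class name, the pinball loss (the quoted setup does not name the loss), and the input dimension are assumptions rather than details confirmed by the paper.

```python
import torch
import torch.nn as nn

class QuantileMLP(nn.Module):
    # MLP matching the reported architecture: hidden layers of sizes 32, 64, 64, 32,
    # 0.1 dropout, leaky-ReLU activations; two outputs for the 5% and 95% quantiles.
    def __init__(self, in_dim, hidden=(32, 64, 64, 32), dropout=0.1):
        super().__init__()
        layers, prev = [], in_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.LeakyReLU(), nn.Dropout(dropout)]
            prev = h
        layers.append(nn.Linear(prev, 2))  # outputs: (q_0.05, q_0.95)
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def pinball_loss(pred, y, quantiles=(0.05, 0.95)):
    # Standard pinball (quantile) loss, summed over the two target quantiles.
    total = 0.0
    for i, q in enumerate(quantiles):
        err = y - pred[:, i]
        total = total + torch.maximum(q * err, (q - 1) * err).mean()
    return total

model = QuantileMLP(in_dim=10)  # in_dim is dataset-dependent (hypothetical value)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Train with batch size 128 for up to 1000 epochs, stopping early if the
# validation loss does not improve for 200 epochs and keeping the best checkpoint.
```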