Fast Online Value-Maximizing Prediction Sets with Conformal Cost Control
Authors: Zhen Lin, Shubhendu Trivedi, Cao Xiao, Jimeng Sun
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our methodological and theoretical contributions are supported by experiments on several healthcare tasks and synthetic datasets. FavMac furnishes higher value compared with several variants and baselines while maintaining strict cost control. (Section 6: Experimental Evaluation) |
| Researcher Affiliation | Collaboration | Zhen Lin (1), Shubhendu Trivedi (2), Cao Xiao (3), Jimeng Sun (1,4). 1: Department of Computer Science, University of Illinois at Urbana-Champaign, USA; 2: shubhendu@csail.mit.edu; 3: Relativity; 4: Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, USA. Correspondence to: Zhen Lin <zhenlin4@illinois.edu>, Jimeng Sun <jimeng@illinois.edu>. |
| Pseudocode | Yes | Algorithm 1 (Quantile Tree, short version) and Algorithm 2 (Expected Cost Control, online). A generic sketch of an online cost-control loop of this kind appears after the table. |
| Open Source Code | Yes | Our code is available at https://github.com/zlin7/FavMac |
| Open Datasets | Yes | MNIST: A synthetic dataset created by superimposing MNIST (Lecun et al., 1998) images... MIMIC: The MIMIC-III dataset (Johnson et al., 2016; Goldberger et al., 2000; Johnson et al., 2019) is collected from the critical care units of the Beth Israel Deaconess Medical Center... |
| Dataset Splits | Yes | MNIST: We first split the training set of the original MNIST (Lecun et al., 1998) into a 90/10 train/validation split; the resulting training/test split is 54000/10000. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU/CPU models or memory specifications. It only mentions that "All experiments are carried out with PyTorch (Paszke et al., 2019)." |
| Software Dependencies | No | All experiments are carried out with PyTorch (Paszke et al., 2019). (The PyTorch version is not specified, nor are versions given for other components such as the ADAM optimizer or the BERT model.) |
| Experiment Setup | Yes | We train the model with ADAM (Kingma & Ba, 2014) with a learning rate of 1e-2 and batch size of 128 for 50 epochs (MNIST); a learning rate of 1e-5 and batch size of 16 for 50 epochs (MIMIC); and a learning rate of 1e-5 and batch size of 128 for 20 epochs (Claim). See the training-configuration sketch after the table. |
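
The pseudocode row refers to Algorithm 2 (Expected Cost Control, online). The paper's own algorithm is not reproduced here; the sketch below only illustrates the general shape of an online cost-controlled prediction-set loop, using a simple adaptive-threshold update in the style of adaptive conformal inference rather than FavMac's procedure. The callables `predict_scores` and `reveal_costs`, the starting threshold, and the step size `lr` are assumptions for illustration.

```python
from typing import Callable, Sequence

def online_cost_control(
    predict_scores: Callable[[int], Sequence[float]],  # per-label scores at step t
    reveal_costs: Callable[[int, list], float],        # realized cost of the chosen set
    n_steps: int,
    target_cost: float,
    lr: float = 0.05,
):
    """Generic online loop: include labels whose score exceeds a threshold,
    then nudge the threshold so the running cost tracks the target.
    Illustrative adaptive-threshold scheme, not FavMac's Algorithm 2."""
    threshold = 0.5  # assumed starting point
    history = []
    for t in range(n_steps):
        scores = predict_scores(t)
        pred_set = [k for k, s in enumerate(scores) if s >= threshold]
        cost = reveal_costs(t, pred_set)  # feedback observed after prediction
        # Raise the threshold (smaller sets) when cost overshoots the target,
        # lower it when there is slack.
        threshold += lr * (cost - target_cost)
        threshold = min(max(threshold, 0.0), 1.0)
        history.append((pred_set, cost, threshold))
    return history
```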
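For the dataset-splits row, the following is a minimal sketch of the reported MNIST split, assuming the standard `torchvision` loader and a seeded random 90/10 split of the 60,000 training images (54,000 train / 6,000 validation) with the 10,000-image test set left untouched; the paper does not state the seed or the exact split mechanism.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Sketch only: the paper reports a 90/10 train/validation split of the
# original MNIST training set with the 10,000-image test set kept as-is.
# The seed and transform below are assumptions.
transform = transforms.ToTensor()
full_train = datasets.MNIST("data", train=True, download=True, transform=transform)
test_set = datasets.MNIST("data", train=False, download=True, transform=transform)

n_train = int(0.9 * len(full_train))  # 54,000
n_val = len(full_train) - n_train     # 6,000
train_set, val_set = random_split(
    full_train, [n_train, n_val],
    generator=torch.Generator().manual_seed(0),  # assumed seed
)
print(len(train_set), len(val_set), len(test_set))  # 54000 6000 10000
```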
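For the experiment-setup row, the quoted hyperparameters can be collected into a plain PyTorch training configuration. This is a minimal sketch assuming a generic `model`, `train_loader`, and `loss_fn`; only the learning rates, batch sizes, and epoch counts come from the paper.

```python
import torch

# Hyperparameters quoted in the paper's experiment setup; everything else
# (model, data loaders, loss) is a placeholder assumption.
CONFIGS = {
    "MNIST": {"lr": 1e-2, "batch_size": 128, "epochs": 50},
    "MIMIC": {"lr": 1e-5, "batch_size": 16,  "epochs": 50},
    "Claim": {"lr": 1e-5, "batch_size": 128, "epochs": 20},
}

def train(model, dataset_name, train_loader, loss_fn):
    cfg = CONFIGS[dataset_name]
    optimizer = torch.optim.Adam(model.parameters(), lr=cfg["lr"])
    for _ in range(cfg["epochs"]):
        for x, y in train_loader:  # loader built with cfg["batch_size"]
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```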