Predict-then-Calibrate: A New Perspective of Robust Contextual LP
Authors: Chunlin Sun, Linyu Liu, Xiaocheng Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments further reinforce the advantage of the predict-then-calibrate paradigm in that an improvement on either the prediction model or the calibration model will lead to a better final performance. In this section, we illustrate the performance of our proposed algorithms via one simple example and also a shortest path problem considered in Elmachtoub and Grigas [2022] and Hu et al. [2022]. |
| Researcher Affiliation | Academia | Chunlin Sun¹, Linyu Liu², Xiaocheng Li³ — ¹Institute for Computational and Mathematical Engineering, Stanford University; ²Department of Automation, Tsinghua University; ³Imperial College Business School, Imperial College London |
| Pseudocode | Yes | Algorithm 1 Box Uncertainty Quantification (BUQ) and Algorithm 2 Ellipsoid Uncertainty Quantification (EUQ) are provided with structured steps. |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the methodology described. |
| Open Datasets | No | The paper describes generating its own datasets for the experiments rather than using a publicly available dataset with concrete access information: 'Here the covariates z = (z1, ..., zd) where zi is sampled from Unif[−0.5, 0.5] independently for i = 1, ..., d, ϵ ~ Unif[−0.5, 0.5], and c = (sign(z1) + ϵ)√|z1| (c is independent of z2, ..., zd).' and 'The data generation process is as follows. First, we generate a 0-1 matrix Θ ∈ R^{40×d} once with random seed 0 and fix it to encode the parameters of the true model, where each entry is generated from a Bernoulli distribution with probability 0.5. Then, the covariate vector zt ∈ R^d is generated from N(0, Id) for t = 1, ..., T.' |
| Dataset Splits | Yes | For our methods of PTC-B and PTC-E, we use 60% of the data for training f̂, 20% for preliminary calibration (D1), and 20% for final adjustment (D2). Randomly split the validation data into two index sets D1 ∪ D2 = {1, ..., T} and D1 ∩ D2 = ∅. |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running experiments (e.g., GPU/CPU models, memory specifications, or cloud resources with specs). |
| Software Dependencies | No | The paper mentions machine learning models used (e.g., Kernel Ridge, Neural Network) but does not provide specific version numbers for any software libraries, frameworks, or solvers (e.g., Python, PyTorch, TensorFlow, CPLEX versions). |
| Experiment Setup | Yes | For our methods of PTC-B and PTC-E, we use 60% of the data for training f̂, 20% for preliminary calibration (D1), and 20% for final adjustment (D2). We select the Kernel Ridge method with the RBF kernel identified as the top-performing model in Figure 3 as the predictive model and the Neural Network (NN) model as the preliminary calibration model for both PTC-B and PTC-E in the ensuing experiments (Figure 4 and Table 1) to compare with other algorithms. |
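The synthetic data generation and 60/20/20 split quoted above can be sketched as follows. This is a minimal illustration using NumPy, not the authors' code (which is not released); the function names `generate_data` and `split_indices` and the choice of n = 1000, d = 5 are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_data(n, d):
    """Sketch of the paper's first synthetic example:
    z_i ~ Unif[-0.5, 0.5], eps ~ Unif[-0.5, 0.5],
    c = (sign(z_1) + eps) * sqrt(|z_1|), so c depends only on z_1."""
    z = rng.uniform(-0.5, 0.5, size=(n, d))
    eps = rng.uniform(-0.5, 0.5, size=n)
    c = (np.sign(z[:, 0]) + eps) * np.sqrt(np.abs(z[:, 0]))
    return z, c

def split_indices(n):
    """60% to train the predictor f_hat, 20% to the preliminary
    calibration set D1, 20% to the final adjustment set D2."""
    idx = rng.permutation(n)
    n_train, n_cal = int(0.6 * n), int(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_cal], idx[n_train + n_cal:]

z, c = generate_data(1000, 5)
train, d1, d2 = split_indices(1000)
```

The random split mirrors the quoted requirement that D1 and D2 are disjoint index sets whose union covers the validation data.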