Orthogonal Bootstrap: Efficient Simulation of Input Uncertainty
Authors: Kaizhao Liu, Jose Blanchet, Lexing Ying, Yiping Lu
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that Orthogonal Bootstrap can significantly improve the result on both simulated and real datasets when the number of Monte Carlo replications is limited. |
| Researcher Affiliation | Academia | (1) Department of Mathematics, Peking University, Beijing, China; (2) Department of Management Science and Engineering, Stanford University; (3) Department of Mathematics, Stanford University; (4) Courant Institute of Mathematical Sciences, New York University; (5) Department of Industrial Engineering and Management Sciences, Northwestern University. |
| Pseudocode | Yes | Algorithm 1 Debiasing via Orthogonal Bootstrap Input: A generic performance measure... (for context, a generic bootstrap bias-correction sketch follows the table) |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that open-source code for the methodology is provided. |
| Open Datasets | Yes | Following (Alaa & Van Der Schaar, 2020), we conduct our experiments on 3 UCI benchmark datasets for regression: yacht hydrodynamics, energy efficiency (Dua & Graff, 2017) and kin8nm. |
| Dataset Splits | Yes | For the Yacht dataset, we utilize a batch size of 64 and train for 500 epochs. We use 80% data for training and 20% for testing. For the Energy dataset, we utilize a batch size of 128 and train for 250 epochs. We use 70% data for training and 30% for testing. For the kin8nm dataset, we utilize a batch size of 256 and train for 150 epochs. We use 95% data for training and 5% for testing. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions software such as TensorFlow and PyTorch, along with the Adam optimizer, but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | For all real data examples, we employ the Adam optimizer with default hyperparameters, with the exception of setting the weight decay to 0.01. The training loss is set to be the squared loss, i.e. L(x, θ) = (fθ(x) − y)², where fθ(x) is parameterized by a two-layer neural network with hidden dimension 100. For the Yacht dataset, we utilize a batch size of 64 and train for 500 epochs. For the Energy dataset, we utilize a batch size of 128 and train for 250 epochs. For the kin8nm dataset, we utilize a batch size of 256 and train for 150 epochs. (A minimal training-setup sketch follows the table.) |
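The Pseudocode row refers to the paper's Algorithm 1 (Debiasing via Orthogonal Bootstrap). The orthogonal construction itself is not reproduced here; as context only, the sketch below implements the standard non-orthogonal bootstrap bias correction that such debiasing procedures build on. The function name, performance measure, and replication count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bootstrap_debias(data, performance_measure, n_boot=200, seed=0):
    """Standard bootstrap bias correction (illustrative baseline only).

    Resamples the data with replacement, estimates the bias of the
    plug-in estimator as the mean bootstrap value minus the plug-in
    value, and subtracts it. This is NOT the paper's Orthogonal
    Bootstrap, which targets the same quantity with far fewer
    Monte Carlo replications.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    plug_in = performance_measure(data)              # plug-in estimate T(P_n)
    boot_vals = np.empty(n_boot)
    for b in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]  # bootstrap sample from P_n
        boot_vals[b] = performance_measure(resample)
    bias = boot_vals.mean() - plug_in                # estimated bias of the plug-in
    return plug_in - bias                            # bias-corrected estimate


# Example: debiasing the (downward-biased) plug-in variance of a small sample.
if __name__ == "__main__":
    x = np.random.default_rng(1).normal(size=50)
    print(bootstrap_debias(x, lambda d: d.var()))
```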
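The Dataset Splits and Experiment Setup rows describe the real-data regression experiments. Below is a minimal PyTorch sketch of that setup for the Yacht configuration (batch size 64, 500 epochs, 80/20 split, two-layer network with hidden dimension 100, squared loss, Adam with weight decay 0.01). The ReLU activation, function name, and data-loading details are assumptions for illustration, not taken from the paper.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_yacht(X, y, hidden_dim=100, batch_size=64, epochs=500,
                train_frac=0.8, weight_decay=0.01):
    """Train a two-layer MLP regressor as described in the experiment setup.

    X: float tensor of shape (n_samples, n_features); y: float tensor of
    shape (n_samples,). The ReLU activation is an assumption; the paper
    only specifies a two-layer network with hidden dimension 100.
    """
    dataset = TensorDataset(X, y)
    n_train = int(train_frac * len(dataset))
    train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

    model = nn.Sequential(
        nn.Linear(X.shape[1], hidden_dim),
        nn.ReLU(),
        nn.Linear(hidden_dim, 1),
    )
    # Adam with default hyperparameters except weight decay = 0.01.
    optimizer = torch.optim.Adam(model.parameters(), weight_decay=weight_decay)
    loss_fn = nn.MSELoss()  # squared loss L(x, θ) = (fθ(x) − y)²

    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(xb).squeeze(-1), yb)
            loss.backward()
            optimizer.step()
    return model, test_set
```

The same sketch applies to the other two datasets by swapping in the values quoted above: batch size 128, 250 epochs, and a 70/30 split for Energy; batch size 256, 150 epochs, and a 95/5 split for kin8nm.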