Learning Decision Policies with Instrumental Variables through Double Machine Learning
Authors: Daqian Shao, Ashkan Soleymani, Francesco Quinzan, Marta Kwiatkowska
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically evaluate DML-IV for IV regression and offline IV bandit problems. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of Oxford, UK 2Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA. |
| Pseudocode | Yes | Algorithm 1 DML-IV with K-fold cross-fitting |
| Open Source Code | Yes | The algorithms are implemented using PyTorch (Paszke et al., 2019), and the code is available on GitHub: https://github.com/shaodaqian/DML-IV |
| Open Datasets | Yes | We consider two semi-synthetic real-world datasets, IHDP (Hill, 2011) and PM-CMR (Wyatt et al., 2020). |
| Dataset Splits | Yes | we randomly split them into training (63%), validation (27%), and testing (10%) following Wu et al. (2023). |
| Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The algorithms are implemented using PyTorch (Paszke et al., 2019), and the code is available on GitHub. While PyTorch is mentioned, a specific version number is not provided, nor are other software dependencies with versions. |
| Experiment Setup | Yes | In this section we use DNN estimators for both stages with network architecture and hyper-parameters provided in Appendix F. Additional results of DML-IV using tree-based estimators such as Random Forests and Gradient Boosting are provided in Appendix G.2, where SOTA performance is also demonstrated. |
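The Pseudocode row references "Algorithm 1 DML-IV with K-fold cross-fitting". The paper's algorithm uses DNN (or tree-based) nuisance estimators; as a rough illustration of the cross-fitting idea only, the sketch below computes out-of-fold nuisance predictions with a toy linear least-squares stand-in. The function name and the use of a linear model are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def cross_fit_predictions(X, y, K=5, seed=0):
    """Illustrative K-fold cross-fitting: each sample's nuisance
    prediction comes from a model trained on the other K-1 folds,
    so predictions are never made by a model that saw that sample.
    (Toy linear least-squares stand-in for the paper's estimators.)"""
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), K)
    preds = np.empty(n)
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        # Fit a linear model (with intercept) on the K-1 training folds.
        Xtr = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        # Predict only on the held-out fold.
        Xte = np.column_stack([np.ones(len(test)), X[test]])
        preds[test] = Xte @ coef
    return preds
```

In DML-style procedures, such out-of-fold predictions are what decouple nuisance estimation error from the second-stage fit.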