Deep Jump Learning for Off-Policy Evaluation in Continuous Treatment Settings
Authors: Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is further justified by theoretical results, simulations, and a real application to warfarin dosing. |
| Researcher Affiliation | Academia | Hengrui Cai North Carolina State University Raleigh, USA hcai5@ncsu.edu Chengchun Shi London School of Economics and Political Science London, UK C.Shi7@lse.ac.uk Rui Song North Carolina State University Raleigh, USA rsong@ncsu.edu Wenbin Lu North Carolina State University Raleigh, USA wlu4@ncsu.edu |
| Pseudocode | Yes | We give the detailed pseudocode in Algorithm 1 in Appendix B due to page limit. |
| Open Source Code | Yes | The code is publicly available at our repository at https://github.com/HengruiCai/DJL. |
| Open Datasets | Yes | We use the dataset provided by the International Warfarin Pharmacogenetics Consortium [9] for analysis. ... [9] Consortium, I. W. P. [2009], Estimation of the warfarin dose with clinical and pharmacogenetic data, New England Journal of Medicine 360(8), 753–764. |
| Dataset Splits | No | The paper describes a data splitting and cross-fitting strategy but does not provide specific percentages, counts, or explicit predefined splits for training, validation, and testing of the main datasets used. |
| Hardware Specification | Yes | The computing infrastructure used is a virtual machine in the AWS Platform with 72 processor cores and 144GB memory. |
| Software Dependencies | No | The paper mentions using an 'MLP regressor implemented by Pedregosa et al. [36]' (referring to scikit-learn), but it does not specify version numbers for any software, libraries, or dependencies. |
| Experiment Setup | Yes | In our implementation, we set Q_I to the class of multilayer perceptrons (MLPs) for each interval I. ... We set m = n/10 to achieve a good balance between the absolute error and the computational cost (see Figure 1 in Appendix C for details). |
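
The "Experiment Setup" and "Software Dependencies" rows describe fitting a scikit-learn MLP regressor within each treatment interval, with an initial discretization of m = n/10 intervals. The snippet below is a minimal sketch of that ingredient only, under assumed names and a toy dataset; it is not the authors' DJL implementation, and the adaptive jump-penalized partition search and cross-fitted value estimation from the paper are omitted here.

```python
# Minimal sketch (assumptions, not the authors' code): split a continuous
# treatment range into m = n/10 equal-width intervals and fit a scikit-learn
# MLPRegressor as the per-interval outcome model Q_I.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))                 # covariates (toy data)
A = rng.uniform(0.0, 1.0, size=n)           # continuous treatment, assumed scaled to [0, 1]
Y = np.sin(3 * A) + X[:, 0] + rng.normal(scale=0.1, size=n)  # toy outcome

m = n // 10                                 # number of initial intervals, m = n/10
edges = np.linspace(0.0, 1.0, m + 1)        # equal-width partition of the treatment range

models = {}
for j in range(m):
    lo, hi = edges[j], edges[j + 1]
    # include the right endpoint in the last interval
    idx = (A >= lo) & (A <= hi) if j == m - 1 else (A >= lo) & (A < hi)
    if idx.sum() < 5:                       # skip intervals with too few observations
        continue
    reg = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    models[j] = reg.fit(X[idx], Y[idx])     # per-interval outcome model Q_I
```

In the paper these per-interval fits are combined with a data-splitting and cross-fitting strategy and a penalized search over partitions of the treatment space; that machinery is not reproduced in this sketch.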