Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Jump Interval-Learning for Individualized Decision Making with Continuous Treatments

Authors: Hengrui Cai, Chengchun Shi, Rui Song, Wenbin Lu

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive simulations and a real data application to a Warfarin study are conducted to demonstrate the empirical validity of the proposed I2DR.
Researcher Affiliation | Academia | Hengrui Cai (EMAIL), Department of Statistics, University of California Irvine, Irvine, CA 92697, USA; Chengchun Shi (EMAIL), Department of Statistics, London School of Economics and Political Science, London, WC2A 2AE, UK; Rui Song (EMAIL) and Wenbin Lu (EMAIL), Department of Statistics, North Carolina State University, Raleigh, NC 27695, USA
Pseudocode | Yes | A pseudocode containing more details is given in Algorithm 1. Algorithm 1: Jump interval-learning. Algorithm 2: Calculation of the cost function.
Open Source Code | Yes | An R package implementing our proposed I2DR is available on CRAN at https://cran.r-project.org/web/packages/JQL/index.html.
Open Datasets | Yes | In this section, we illustrate the empirical performance of our proposed method on real data from the International Warfarin Pharmacogenetics Consortium (Consortium, 2009).
Dataset Splits | Yes | Specifically, we randomly select 70% of the data to compute the proposed I2DR and the IDR obtained by K-O-L, and evaluate their value functions using the remaining dataset.
Hardware Specification | Yes | The computing infrastructure used is a virtual machine containing the second generation Intel Xeon Scalable Processors with 16 processor cores and 64GB memory in the AWS Platform.
Software Dependencies | No | In our implementation, we apply the Multi-layer Perceptron (MLP) regressor (Pedregosa et al., 2011) for parameter estimation. We refer to the resulting optimization as deep jump interval-learning (D-JIL).
Experiment Setup | Yes | For each scenario, we set p = 4 and consider three different choices of the sample size, corresponding to n = 200, 400, 800. We apply the proposed L-JIL and D-JIL to both scenarios. The detailed implementation is discussed in Section 3.3. We set m = n/5, λn = 0, γn = 4n^(-1) log(n), and construct the CI for V^opt based on the procedure described in Section 4.1.3.
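The Experiment Setup row fixes the tuning parameters as functions of the sample size: m = n/5, λn = 0, and γn = 4n^(-1) log(n). A minimal Python sketch of those settings (the function name `jil_hyperparams` and the interpretation of m as an integer count are assumptions, not the authors' code):

```python
import math

def jil_hyperparams(n):
    """Return (m, lambda_n, gamma_n) for sample size n, per the reported setup."""
    m = n // 5                      # m = n/5 (assumed to be an integer count)
    lambda_n = 0                    # lambda_n is set to zero in the experiments
    gamma_n = 4 * math.log(n) / n   # gamma_n = 4 n^(-1) log(n)
    return m, lambda_n, gamma_n

# The three sample sizes considered in the simulations
for n in (200, 400, 800):
    m, lam, gam = jil_hyperparams(n)
    print(f"n={n}: m={m}, lambda_n={lam}, gamma_n={gam:.4f}")
```

Note that γn shrinks as n grows, so the regularization weakens with more data while the number of candidate intervals m grows linearly.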
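The Dataset Splits row quotes a 70%/30% random split: the I2DR is fit on 70% of the data and its value function is evaluated on the remainder. The paper's implementation is the R package JQL; the split itself can be sketched in Python as follows (the function name `split_70_30` and the seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed chosen only for illustration

def split_70_30(n_samples):
    """Randomly assign 70% of indices to estimation, 30% to evaluation."""
    idx = rng.permutation(n_samples)
    cut = int(0.7 * n_samples)
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_70_30(1000)
print(len(train_idx), len(test_idx))  # 700 300
```

The decision rule would be estimated on `train_idx` and its value function estimated on the held-out `test_idx`, mirroring the evaluation protocol quoted above.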