Dynamic Subgroup Identification in Covariate-adjusted Response-adaptive Randomization Experiments

Authors: Yanping Li, Jingshen Wang, Waverly Wei

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we investigate the performance of our proposed design strategy for identifying the tie set of best-performing subgroups in a synthetic case study using clinical trial data. We compare our proposed design strategy with the complete randomization design and two multi-armed bandit (MAB) algorithms. To evaluate the performance of different design strategies, we assess the effectiveness of each adaptive experiment strategy from two aspects. First, we compare the correct selection probability of identifying the best subgroups. ... Second, we compare the 95% confidence interval, N-scaled bias, and standard deviation of the estimated best subgroup treatment effect. ... The synthetic case study results are summarized in Figure 1 and Table 1.
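The correct selection probability metric quoted above can be illustrated with a minimal Monte Carlo sketch: the fraction of simulation replications in which the subgroup with the largest estimated effect matches the truly best subgroup. The subgroup means, sample sizes, and argmax selection rule below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Toy illustration of "correct selection probability": the fraction of
# replications in which the estimated-best subgroup equals the true best.
# All numbers here are hypothetical, not taken from the paper.
rng = np.random.default_rng(1)

n_reps, n_per = 500, 200                         # replications, samples per subgroup
subgroup_means = np.array([0.40, 0.45, 0.60, 0.50])  # assumed true effects
true_best = int(subgroup_means.argmax())             # index 2

selected = np.empty(n_reps, dtype=int)
for r in range(n_reps):
    # Estimate each subgroup's effect from noisy Bernoulli outcomes,
    # then select the subgroup with the largest estimate.
    est = rng.binomial(n_per, subgroup_means) / n_per
    selected[r] = int(est.argmax())

correct_selection_prob = float(np.mean(selected == true_best))
```

With a 0.10 gap between the best and second-best subgroup and 200 observations per subgroup, the argmax rule picks the true best in most replications, so `correct_selection_prob` lands close to 1.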
Researcher Affiliation | Academia | Yanping Li (School of Statistics and Data Science, Nankai University, yanpingli@mail.nankai.edu.cn); Jingshen Wang (Division of Biostatistics, University of California, Berkeley, jingshenwang@berkeley.edu); Waverly Wei (Department of Data Sciences and Operations, University of Southern California, waverly@marshall.usc.edu)
Pseudocode | Yes | Algorithm 1: Dynamic subgroup identification CARA design
Open Source Code | No | No direct link to an open-source code repository or specific statement of code release was found in the paper's text. The NeurIPS checklist in the provided text indicates 'Yes' for providing open access to data and code, referring to 'Section 6 and Supplementary Materials', but no explicit access information is present in the analyzed text.
Open Datasets | Yes | We design our synthetic case study using the dataset from the Mayo Clinic's trial on primary biliary cirrhosis (PBC)... This dataset includes 424 patients... [26].
Dataset Splits | No | The paper describes a sequential enrollment process for participants in a clinical trial simulation over 'T stages', where 'accumulated experimental data' is used to 'dynamically revise treatment allocation'. This is a continuous adaptation process rather than a static train/validation/test dataset split; no explicit 'validation' split is mentioned.
Hardware Specification | Yes | All experiments are conducted with an Intel Core i7-11800H CPU and 16 GB of RAM.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) were explicitly stated.
Experiment Setup | Yes | Here, we set T = 15 and n_t = 400, for t = 1, ..., T. All experiments are conducted with an Intel Core i7-11800H CPU and 16 GB of RAM. In our resampling procedure, we set B = 2,000. Here, we set ϵ = 0.1. We choose γ = 0.05 in our simulation studies.
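The reported configuration (T = 15 stages with n_t = 400 participants per stage) implies 6,000 enrolled participants in total. A toy sketch of a sequential adaptive experiment loop under these settings follows; the Bernoulli reward model and the ε-greedy allocation rule are stand-in assumptions for illustration, not the paper's CARA design (Algorithm 1).

```python
import numpy as np

# Hypothetical sequential adaptive experiment using the reported
# configuration: T = 15 stages, n_t = 400 per stage, epsilon = 0.1.
# The two-arm Bernoulli rewards and epsilon-greedy allocation below
# are illustrative, not the paper's method.
rng = np.random.default_rng(0)

T, n_t, epsilon = 15, 400, 0.1
true_means = {"treatment": 0.55, "control": 0.45}  # assumed effects

counts = {arm: 0 for arm in true_means}
successes = {arm: 0 for arm in true_means}

for t in range(T):
    for _ in range(n_t):
        # Explore uniformly with probability epsilon (or until both
        # arms have data); otherwise allocate to the arm with the
        # higher estimated success rate.
        if rng.random() < epsilon or min(counts.values()) == 0:
            arm = str(rng.choice(list(true_means)))
        else:
            est = {a: successes[a] / counts[a] for a in true_means}
            arm = max(est, key=est.get)
        counts[arm] += 1
        successes[arm] += int(rng.random() < true_means[arm])

total_enrolled = sum(counts.values())  # T * n_t = 6000
```

The greedy step concentrates allocation on the empirically better arm while the ε fraction of exploratory draws keeps both arms' estimates updating, which is the generic trade-off that response-adaptive designs formalize.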