Fair Adaptive Experiments
Authors: Waverly Wei, Xinwei Ma, Jingshen Wang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through our theoretical investigation, we characterize the convergence rate of the estimated treatment effects and the associated standard deviations at the group level and further prove that our adaptive treatment assignment algorithm, despite not having a closed-form expression, approaches the optimal allocation rule asymptotically. Our proof strategy takes into account the fact that the allocation decisions in our design depend on sequentially accumulated data, which poses a significant challenge in characterizing the properties and conducting statistical inference of our method. We further provide simulation evidence and two synthetic data studies to showcase the performance of our fair adaptive experiment strategy. |
| Researcher Affiliation | Academia | Waverly Wei, Division of Biostatistics, University of California, Berkeley (linqing_wei@berkeley.edu); Xinwei Ma, Department of Economics, University of California, San Diego (x1ma@ucsd.edu); Jingshen Wang, Division of Biostatistics, University of California, Berkeley (jingshenwang@berkeley.edu) |
| Pseudocode | Yes | We present our proposed fair adaptive experiment strategy in Algorithm 1. Algorithm 1 Fair adaptive experiment |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes generating synthetic data for its simulations (DGP 1 and DGP 2) rather than using a pre-existing, publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper describes its simulation setup, including sample sizes for stages (e.g., n1 = 40, nt = 1), but does not specify traditional train/validation/test dataset splits. Since the data is synthetically generated, these splits are not applicable in the same way as with fixed datasets. |
| Hardware Specification | No | The paper discusses simulation studies but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run these experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks) used for the experiments. |
| Experiment Setup | Yes | Our simulation design generates the potential outcomes under two data-generating processes. DGP 1: Continuous potential outcomes: Y_i(d) \| X_i ∈ S_j ∼ N(µ_{d,j}, σ_{d,j}), where µ_1 = (1, 4), µ_0 = (4, 2), σ_1 = (2.5, 1.2), and σ_0 = (1.5, 3.5). The group proportions are p = (0.5, 0.5). The group-level treatment effects are τ = (−3, 2). DGP 2: Binary potential outcomes: Y_i(d) \| X_i ∈ S_j ∼ Bernoulli(µ_{d,j}), where µ_1 = (0.6, 0.2, 0.3, 0.4, 0.1) and µ_0 = (0.1, 0.5, 0.3, 0.4, 0.6). The group proportions are p = (0.15, 0.25, 0.2, 0.25, 0.15). ... To mimic fully adaptive experiments, we fix the stage 1 sample size at n_1 = 40 and n_t = 1 for t = 2, . . . , T, where the total number of stages ranges over T ∈ {40, . . . , 400}. |
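The quoted DGP 1 setup can be reproduced as a short simulation: draw each unit's covariate group from the stated proportions, then draw both potential outcomes from the group-specific normal distributions. The sketch below uses only the parameters quoted above; the function name `simulate_dgp1` and the Monte Carlo check of the group-level effects τ = (−3, 2) are illustrative assumptions, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# DGP 1 parameters as quoted in the paper (two covariate groups)
mu1 = np.array([1.0, 4.0])   # treated means per group
mu0 = np.array([4.0, 2.0])   # control means per group
sd1 = np.array([2.5, 1.2])   # treated standard deviations per group
sd0 = np.array([1.5, 3.5])   # control standard deviations per group
p = np.array([0.5, 0.5])     # group proportions

def simulate_dgp1(n, rng):
    """Draw covariate groups and both potential outcomes for n units."""
    g = rng.choice(len(p), size=n, p=p)          # group membership X_i ∈ S_j
    y1 = rng.normal(mu1[g], sd1[g])              # Y_i(1)
    y0 = rng.normal(mu0[g], sd0[g])              # Y_i(0)
    return g, y0, y1

# Monte Carlo check: group-level effects should approach tau = mu1 - mu0 = (-3, 2)
g, y0, y1 = simulate_dgp1(100_000, rng)
tau_hat = [(y1[g == j] - y0[g == j]).mean() for j in range(2)]
```

With 100,000 draws, `tau_hat` should be close to (−3, 2), matching the group-level treatment effects stated in the setup.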