Adaptive Instrument Design for Indirect Experiments
Authors: Yash Chandak, Shiv Shankar, Vasilis Syrgkanis, Emma Brunskill
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: 'Through experiments conducted in various domains inspired by real-world applications, we showcase how our method can significantly improve the sample efficiency of indirect experiments.' See also Section 5, Experiments. |
| Researcher Affiliation | Academia | Yash Chandak Stanford University Shiv Shankar UMass Amherst Vasilis Syrgkanis Stanford University Emma Brunskill Stanford University |
| Pseudocode | Yes | A pseudo-code for the proposed algorithm is provided in Appendix H (Section H.1, 'Pseudo-code'), comprising Algorithm 1 (DIA: Designing Instruments Adaptively) and Algorithm 2 (Optimize). |
| Open Source Code | No | The paper states 'The original data-generating process for Trip Advisor domain is available on Github' and 'Our semi-synthetic version is available in the supplementary material,' which refer to data or simulators, but there is no explicit statement or link confirming the release of the source code for the proposed DIA methodology itself. |
| Open Datasets | Yes | The paper states 'We use the simulator built from Trip Advisor customer data (Syrgkanis et al., 2019)' and 'The original data-generating process for Trip Advisor domain is available on Github.' |
| Dataset Splits | No | The paper describes data generation processes for synthetic domains and mentions a 'held-out dataset with M samples' for MSE estimation, but it does not specify explicit train/validation/test dataset split percentages or sample counts for the data used in their experiments, nor does it reference predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory amounts, or detailed computer specifications used for running the experiments. |
| Software Dependencies | No | The paper cites JAX (Bradbury et al., 2018) and PyTorch (Paszke et al., 2019) as tools used, but it does not explicitly state the specific version numbers of these or any other software dependencies crucial for replicating the experiments. |
| Experiment Setup | Yes | For the CIV setting, the instrument sampling policy πφ(·\|x) is modeled as a two-layered neural network with 8 hidden units. For the Trip Advisor setting, the non-linear function is a two-layered neural network with 8 hidden nodes and sigmoid activation units; parameters for both g and dP are obtained using gradient descent, and the instrument sampling policy πφ(·\|x) is again a two-layered neural network with 8 hidden units. The sub-sampling size k is treated as a hyper-parameter, parametrized as k = n^α, with a search over α ∈ [0.5, 0.8]. |
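To make the reported setup concrete, the following is a minimal sketch (not the authors' released code) of the pieces the row above describes: a two-layered network with 8 sigmoid hidden units standing in for the instrument sampling policy πφ(·|x), and the sub-sampling size k = n^α. The function and variable names (`init_policy`, `policy_probs`, `subsample_size`) are illustrative assumptions; the paper's own implementation uses JAX/PyTorch.

```python
import numpy as np

def init_policy(input_dim, num_instruments, hidden=8, seed=0):
    """Parameters of a two-layered network with 8 hidden units,
    as described for the instrument sampling policy pi_phi(.|x)."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(scale=0.1, size=(input_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": rng.normal(scale=0.1, size=(hidden, num_instruments)),
        "b2": np.zeros(num_instruments),
    }

def policy_probs(params, x):
    """Forward pass: sigmoid hidden layer, then a softmax over
    the available instruments so each row is a distribution."""
    h = 1.0 / (1.0 + np.exp(-(x @ params["W1"] + params["b1"])))
    logits = h @ params["W2"] + params["b2"]
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def subsample_size(n, alpha):
    """Sub-sampling size k = n^alpha; the paper searches alpha in [0.5, 0.8]."""
    return int(np.floor(n ** alpha))
```

For example, `policy_probs(init_policy(3, 4), x)` returns one probability vector over 4 instruments per context row of `x`, and `subsample_size(10000, 0.5)` gives k = 100.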