Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Online Balanced Experimental Design
Authors: David Arbour, Drew Dimmery, Tung Mai, Anup Rao
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we provide simulation evidence on the efficacy of our proposed methods. |
| Researcher Affiliation | Collaboration | 1Adobe Research, San Jose, CA, USA 2Data Science @ University of Vienna, Vienna, AT. |
| Pseudocode | Yes | Algorithm 1 takes i = 1, …, n unit vectors sequentially and assigns them to treatment or control |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the methodology described. |
| Open Datasets | No | All data generating processes used in simulations are shown in Table A1. If not otherwise specified, the sample size is 1000 subjects, the number of groups is two and the marginal probability of treatment is 1/2. |
| Dataset Splits | No | The paper describes simulation studies with various data generating processes but does not specify explicit train/validation/test splits common in ML model training. |
| Hardware Specification | Yes | All timings performed on a ml.r5.2xlarge instance of Amazon SageMaker. |
| Software Dependencies | No | The paper describes algorithms and simulations but does not explicitly list specific software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9'). |
| Experiment Setup | Yes | If not otherwise specified, the sample size is 1000 subjects, the number of groups is two and the marginal probability of treatment is 1/2. |
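The pseudocode row above refers to the paper's Algorithm 1, which receives covariate vectors one at a time and assigns each to treatment or control while keeping the running covariate imbalance small. As a rough illustration of that online, one-pass structure only — the paper's actual algorithm uses a randomized assignment rule with formal balance and robustness guarantees, whereas the greedy sign choice below is a simplified stand-in — a minimal sketch:

```python
import numpy as np

def online_balanced_assignment(units):
    """Illustrative greedy online design: each arriving unit (a covariate
    vector) is assigned +1 (treatment) or -1 (control) so that the running
    imbalance vector sum(s_i * x_i) stays small.

    NOTE: this is NOT the paper's Algorithm 1, which randomizes the
    assignment; the greedy rule here only conveys the sequential setup.
    """
    units = np.asarray(units, dtype=float)
    imbalance = np.zeros(units.shape[1])
    signs = []
    for x in units:
        # Pick the sign that reduces (or least increases) the imbalance:
        # flipping against the current inner product shrinks ||imbalance||^2.
        s = -1.0 if imbalance @ x > 0 else 1.0
        imbalance += s * x
        signs.append(int(s))
    return signs, imbalance

# Matching the simulation setup quoted above: 1000 subjects, two groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
signs, imb = online_balanced_assignment(X)
```

With this greedy rule the squared imbalance norm can grow by at most the squared norm of each incoming unit, so the final imbalance is bounded by the total covariate energy; in practice it stays far smaller than a purely random split.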