Statistical Inference with M-Estimators on Adaptively Collected Data

Authors: Kelly Zhang, Lucas Janson, Susan Murphy

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Figure 1 we plot the empirical distributions of the z-statistic for the least-squares estimator both with and without adaptive weighting. We consider a two-armed bandit with A_t ∈ {0, 1}. ... In Figure 4 we plot the empirical coverage probabilities and volumes of 90% confidence regions for β(P) := β₁(P) in both the continuous and binary reward settings.
Researcher Affiliation | Academia | Kelly W. Zhang, Department of Computer Science, Harvard University (kellywzhang@seas.harvard.edu); Lucas Janson, Department of Statistics, Harvard University (ljanson@fas.harvard.edu); Susan A. Murphy, Departments of Statistics and Computer Science, Harvard University (samurphy@fas.harvard.edu)
Pseudocode | No | The paper describes algorithms and methods in text and mathematical formulas but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statements about releasing code or links to a code repository.
Open Datasets | No | The paper describes generating its own data for simulations: 'In both simulation settings we collect data using Thompson Sampling with a linear model for the expected reward and normal priors'. It does not use or provide access information for a public or open dataset.
Dataset Splits | No | The paper describes generating data for its simulations ('In both simulation settings we collect data using Thompson Sampling') but does not specify any training, validation, or test dataset splits.
Hardware Specification | No | The paper does not contain any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud resources) used for running the experiments.
Software Dependencies | No | The paper describes the methods and models used (e.g., least-squares estimators, maximum likelihood estimators, Thompson Sampling) but does not list any specific software or library names with version numbers.
Experiment Setup | Yes | In the continuous reward setting, we use least-squares estimators with a correctly specified model for the expected reward, i.e., M-estimators with m_β(R_t, X_t, A_t) = (R_t − X_t^⊤ β)². ... In both simulation settings we collect data using Thompson Sampling with a linear model for the expected reward and normal priors ... We constrain the action selection probabilities with clipping at a rate of 0.05.
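The data-collection scheme quoted above (Thompson Sampling with normal priors and action-probability clipping at 0.05) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: it assumes a two-armed bandit with a normal-normal conjugate model for each arm's mean reward, and all function names, defaults, and the Monte Carlo posterior-probability step are assumptions.

```python
import numpy as np

def clip_prob(p, rate=0.05):
    """Constrain an action-selection probability to [rate, 1 - rate]."""
    return min(max(p, rate), 1.0 - rate)

def thompson_sampling_two_arm(T=1000, true_means=(0.0, 0.0), noise_sd=1.0,
                              prior_mean=0.0, prior_var=1.0, clip_rate=0.05,
                              seed=0):
    """Two-armed Thompson Sampling with normal priors and clipped
    action-selection probabilities (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Posterior mean/variance for each arm's mean reward (normal-normal model).
    post_mean = np.full(2, prior_mean, dtype=float)
    post_var = np.full(2, prior_var, dtype=float)
    actions, rewards, probs = [], [], []
    for t in range(T):
        # Estimate P(arm 1's sampled mean exceeds arm 0's) by Monte Carlo,
        # then clip so probabilities stay in [clip_rate, 1 - clip_rate].
        draws = rng.normal(post_mean[:, None], np.sqrt(post_var[:, None]),
                           size=(2, 1000))
        p1 = clip_prob((draws[1] > draws[0]).mean(), clip_rate)
        a = int(rng.random() < p1)
        r = rng.normal(true_means[a], noise_sd)
        # Conjugate normal update for the selected arm only.
        prec = 1.0 / post_var[a] + 1.0 / noise_sd**2
        post_mean[a] = (post_mean[a] / post_var[a] + r / noise_sd**2) / prec
        post_var[a] = 1.0 / prec
        actions.append(a); rewards.append(r); probs.append(p1)
    return np.array(actions), np.array(rewards), np.array(probs)
```

The clipping step is what keeps every action's selection probability bounded away from 0 and 1, which is the condition the paper's inference guarantees rely on.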
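The z-statistic comparison described in the Research Type row (least-squares estimator with and without adaptive weighting) can also be sketched. This is a hypothetical illustration, not the paper's implementation: it estimates a single arm's mean reward, uses square-root inverse-propensity weights 1/sqrt(π_t) as the adaptive weights, and the function name and standard-error formulas are assumptions.

```python
import numpy as np

def z_statistics(actions, rewards, probs, true_mean=0.0):
    """Z-statistics for arm 1's mean reward on bandit data.

    actions: 0/1 array of selected arms; rewards: observed rewards;
    probs: probability arm 1 was selected at each step.
    Returns (unweighted z, adaptively weighted z)."""
    mask = actions == 1
    # Unweighted least squares reduces to the sample mean of arm-1 rewards.
    n1 = mask.sum()
    est = rewards[mask].mean()
    se = rewards[mask].std(ddof=1) / np.sqrt(n1)
    z_unweighted = (est - true_mean) / se
    # Adaptive weighting: weight step t by 1/sqrt(pi_t(1)) when arm 1 is chosen.
    w = mask / np.sqrt(probs)
    est_w = (w * rewards).sum() / w.sum()
    resid = rewards - est_w
    se_w = np.sqrt((w**2 * resid**2).sum()) / w.sum()
    z_weighted = (est_w - true_mean) / se_w
    return z_unweighted, z_weighted
```

Replicating Figure 1's experiment would amount to computing both statistics over many simulated bandit runs and comparing their empirical distributions to a standard normal.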