A unified framework for bandit multiple testing
Authors: Ziyu Xu, Ruodu Wang, Aaditya Ramdas
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform simulations for the sub-Gaussian setting discussed in Section 4 to demonstrate that our version of Algorithm 1 using e-variables is empirically as efficient as the algorithm of JJ, which uses p-variables (code available here). |
| Researcher Affiliation | Academia | Ziyu Xu, Department of Statistics and Data Science, Carnegie Mellon University, USA (xzy@cmu.edu); Ruodu Wang, Department of Statistics and Actuarial Science, University of Waterloo, Canada (wang@uwaterloo.ca); Aaditya Ramdas, Department of Statistics and Data Science and Machine Learning Department, Carnegie Mellon University, USA (aramdas@cmu.edu) |
| Pseudocode | Yes | Algorithm 1: A meta-algorithm for bandit multiple testing that decouples exploration and evidence. |
| Open Source Code | Yes | We perform simulations for the sub-Gaussian setting discussed in Section 4 to demonstrate that our version of Algorithm 1 using e-variables is empirically as efficient as the algorithm of JJ, which uses p-variables (code available here). |
| Open Datasets | No | Let P_i = N(μ_i, 1) where μ_i = μ_0 = 0 if i ∈ H_0 and μ_i = 1/2 if i ∈ H_1. We consider 3 setups, where we set the number of non-null hypotheses to be \|H_1\| = 2, log K, and K, to see the effect of different magnitudes of non-null hypotheses on the sample complexity of each method. |
| Dataset Splits | No | The paper describes generating synthetic data for simulations but does not specify any training, validation, or test dataset splits. It focuses on the theoretical properties and empirical performance on simulated scenarios. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments or simulations. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers used for the experiments. |
| Experiment Setup | Yes | Simulation setup: Let P_i = N(μ_i, 1) where μ_i = μ_0 = 0 if i ∈ H_0 and μ_i = 1/2 if i ∈ H_1. We consider 3 setups, where we set the number of non-null hypotheses to be \|H_1\| = 2, log K, and K, to see the effect of different magnitudes of non-null hypotheses on the sample complexity of each method. We set δ = 0.05 and compare 4 different methods, pairing the same two exploration components with both e-variables and p-variables. The first exploration component we consider is simply uniform sampling across each arm (Uni). The second is the UCB sampling strategy described in (5a). When using BH, our formulation for p-variables is (5b), which is the same as JJ; like JJ, we set φ = φ_JJ in our simulations. When using e-BH, we set our e-variables to E^{PM-H}_{i,t} := ∏_{j=1}^{N_i(t)} exp(λ_{i,t_i(j)} (X_{i,t_i(j)} − μ_0) − λ²_{i,t_i(j)}/2), where λ_{i,t} := √(2 log(2/α) / (N_i(t) log(N_i(t) + 1))), which is the default choice of λ_{i,t} suggested in Waudby-Smith and Ramdas [45]. |
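The two quoted ingredients of the e-variable method can be sketched in a few lines: the predictable-mixture Hoeffding e-process for one arm, and the e-BH procedure that rejects using the largest e-values. This is a minimal illustration, not the authors' released code; the function names, the uncapped λ schedule, and the single-arm interface are assumptions made here for concreteness.

```python
import numpy as np

def pm_h_e_process(x, mu0=0.0, alpha=0.05):
    """Predictable-mixture Hoeffding e-process for 1-sub-Gaussian samples
    x_1..x_n from one arm, testing H0: mu = mu0.

    E_t = prod_{j<=t} exp(lam_j * (x_j - mu0) - lam_j^2 / 2), with the
    predictable schedule lam_t = sqrt(2 log(2/alpha) / (t log(t + 1)))
    (the default suggested in Waudby-Smith and Ramdas).
    Returns the full e-process trajectory (E_1, ..., E_n).
    """
    x = np.asarray(x, dtype=float)
    t = np.arange(1, len(x) + 1)
    lam = np.sqrt(2 * np.log(2 / alpha) / (t * np.log(t + 1)))
    log_e = np.cumsum(lam * (x - mu0) - lam**2 / 2)
    return np.exp(log_e)

def e_bh(e_values, delta=0.05):
    """e-BH (Wang and Ramdas): with K hypotheses, reject the k* hypotheses
    with the largest e-values, where k* = max{k : e_(k) >= K / (k * delta)}
    and e_(k) is the k-th largest e-value. Returns a boolean mask."""
    e = np.asarray(e_values, dtype=float)
    K = len(e)
    order = np.argsort(-e)                      # indices, largest e first
    thresh = K / (delta * np.arange(1, K + 1))  # K/(k*delta) for k = 1..K
    hits = np.nonzero(e[order] >= thresh)[0]
    k_star = hits.max() + 1 if hits.size else 0
    rejected = np.zeros(K, dtype=bool)
    rejected[order[:k_star]] = True
    return rejected
```

Under the null (x drawn with mean μ_0), each trajectory is an e-process, so e-BH applied to the per-arm e-values at any stopping time controls FDR at level δ without any dependence adjustment, which is the decoupling of exploration and evidence that Algorithm 1 formalizes.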