A unified framework for bandit multiple testing

Authors: Ziyu Xu, Ruodu Wang, Aaditya Ramdas

NeurIPS 2021

Reproducibility Variable Result LLM Response
Research Type Experimental We perform simulations for the sub-Gaussian setting discussed in Section 4 to demonstrate that our version of Algorithm 1 using e-variables is empirically as efficient as the algorithm of JJ, which uses p-variables (code available here).
Researcher Affiliation Academia Ziyu Xu, Department of Statistics and Data Science, Carnegie Mellon University, USA (xzy@cmu.edu); Ruodu Wang, Department of Statistics and Actuarial Science, University of Waterloo, Canada (wang@uwaterloo.ca); Aaditya Ramdas, Department of Statistics and Data Science and Machine Learning Department, Carnegie Mellon University, USA (aramdas@cmu.edu)
Pseudocode Yes Algorithm 1: A meta-algorithm for bandit multiple testing that decouples exploration and evidence.
Open Source Code Yes We perform simulations for the sub-Gaussian setting discussed in Section 4 to demonstrate that our version of Algorithm 1 using e-variables is empirically as efficient as the algorithm of JJ, which uses p-variables (code available here).
Open Datasets No Let ν_i = N(μ_i, 1) where μ_i = μ_0 = 0 if i ∈ H_0 and μ_i = 1/2 if i ∈ H_1. We consider 3 setups, where we set the number of non-null hypotheses to be |H_1| = 2, log k, and k, to see the effect of different magnitudes of non-null hypotheses on the sample complexity of each method.
Dataset Splits No The paper describes generating synthetic data for simulations but does not specify any training, validation, or test dataset splits. It focuses on the theoretical properties and empirical performance on simulated scenarios.
Hardware Specification No The paper does not provide any specific details about the hardware used to run the experiments or simulations.
Software Dependencies No The paper does not list specific software dependencies with version numbers used for the experiments.
Experiment Setup Yes Simulation setup: Let ν_i = N(μ_i, 1) where μ_i = μ_0 = 0 if i ∈ H_0 and μ_i = 1/2 if i ∈ H_1. We consider 3 setups, where we set the number of non-null hypotheses to be |H_1| = 2, log k, and k, to see the effect of different magnitudes of non-null hypotheses on the sample complexity of each method. We set δ = 0.05 and compare 4 different methods. We compare the same two exploration components for both e-variables and p-variables. The first exploration component we consider is simply uniform sampling across each arm (Uni). The second is the UCB sampling strategy described in (5a). When using BH, our formulation for p-variables is (5b), which is the same as JJ. Like JJ, we set φ = φ_JJ in our simulations. When using e-BH, we set our e-variables to E^{PM-H}_{i,t} := ∏_{j=1}^{T_i(t)} exp(λ_{i,t_i(j)} (X_{i,t_i(j)} − μ_0) − λ²_{i,t_i(j)}/2), with λ_{i,t} = √(2 log(2/α) / (T_i(t) log(T_i(t)+1))), which is the default choice of λ_{i,t} suggested in Waudby-Smith and Ramdas [45].
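The experiment-setup row above quotes two concrete ingredients: the predictable-mixture Hoeffding e-variable E^{PM-H} with the default λ of Waudby-Smith and Ramdas, and the e-BH procedure applied to the resulting e-values. The following is a minimal, hypothetical Python sketch of those two pieces, not the authors' released code: function names, array shapes, and the uniform-sampling toy instance at the bottom are my own, and the sketch assumes the standard e-BH rule (with K e-values, reject the hypotheses with the k* largest e-values, where k* = max{k : e_[k] ≥ K/(kα)}).

```python
import numpy as np

def pm_hoeffding_e_value(x, mu0=0.0, alpha=0.05):
    """Sketch of the predictable-mixture Hoeffding e-variable E^{PM-H}
    for a single arm, given that arm's observed rewards x (1-sub-Gaussian).
    Uses lambda_j = sqrt(2*log(2/alpha) / (j*log(j+1))), the default bet
    size quoted above from Waudby-Smith and Ramdas."""
    x = np.asarray(x, dtype=float)
    j = np.arange(1, len(x) + 1)            # within-arm sample index T_i(t)
    lam = np.sqrt(2 * np.log(2 / alpha) / (j * np.log(j + 1)))
    # Product of exp(lam*(x - mu0) - lam^2/2), accumulated in log space.
    return float(np.exp(np.sum(lam * (x - mu0) - lam ** 2 / 2)))

def e_bh(e_values, alpha=0.05):
    """Standard e-BH: reject the hypotheses carrying the k* largest
    e-values, where k* = max{k : e_[k] >= K / (k * alpha)}."""
    e = np.asarray(e_values, dtype=float)
    K = len(e)
    order = np.argsort(-e)                  # indices by decreasing e-value
    k = np.arange(1, K + 1)
    ok = np.nonzero(e[order] >= K / (k * alpha))[0]
    return order[: ok[-1] + 1] if len(ok) else np.array([], dtype=int)

# Toy version of the simulated instance: nu_i = N(mu_i, 1) with mu_i = 0
# on nulls and mu_i = 1/2 on non-nulls, under uniform (Uni) sampling.
rng = np.random.default_rng(0)
mus = np.array([0.0, 0.0, 0.5])            # one non-null, for illustration
e = [pm_hoeffding_e_value(rng.normal(mu, 1.0, size=500)) for mu in mus]
print(e_bh(e, alpha=0.05))                 # indices of rejected hypotheses
```

Under the null (rewards with mean μ_0), the summed log terms are −Σ λ²/2 < 0, so the e-variable stays below 1 deterministically for constant-zero data, while a sustained mean shift of 1/2 drives it upward; e-BH then thresholds the collected e-values.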