Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Best Arm Identification for Contaminated Bandits

Authors: Jason Altschuler, Victor-Emmanuel Brunel, Alan Malek

JMLR 2019

Reproducibility Variables

Research Type: Theoretical
LLM Response: This paper studies active learning in the context of robust statistics. Specifically, we propose a variant of the Best Arm Identification problem for contaminated bandits... The primary challenge of the contaminated bandit setting is that the true distributions are only partially identifiable, even with infinite samples. To address this, we develop tight, non-asymptotic sample complexity bounds for high-probability estimation of the first two robust moments... These concentration inequalities are the main technical contributions of the paper and may be of independent interest. Using these results, we adapt several classical Best Arm Identification algorithms to the contaminated bandit setting and derive sample complexity upper bounds for our problem. Finally, we provide matching information-theoretic lower bounds on the sample complexity (up to a small logarithmic factor).

Researcher Affiliation: Academia
LLM Response: Jason Altschuler EMAIL Victor-Emmanuel Brunel EMAIL Alan Malek EMAIL Massachusetts Institute of Technology 77 Massachusetts Ave, Cambridge, MA 02139

Pseudocode: Yes
LLM Response: Algorithm 1: Simple uniform exploration algorithm for PIBAI. Algorithm 2: Adaptation of the Successive Elimination algorithm for PIBAI. Algorithm 3: Adaptation of the Successive Elimination algorithm (see Algorithm 2) for CBAI.

Open Source Code: No
LLM Response: The paper does not contain any explicit statements about code availability, links to code repositories, or mentions of code in supplementary materials.

Open Datasets: No
LLM Response: The paper is theoretical and focuses on developing algorithms and deriving sample complexity bounds. It uses illustrative examples (e.g., Pat's salary survey, drug responses) but does not conduct experiments on specific, named datasets, nor does it provide access information for any data used for empirical validation.

Dataset Splits: No
LLM Response: The paper is theoretical and does not report on empirical experiments using datasets. Therefore, there is no mention of dataset splits (e.g., training, validation, test splits) required for reproduction.

Hardware Specification: No
LLM Response: The paper is theoretical, presenting algorithms, proofs, and complexity bounds. It does not describe any experimental setup that would involve specific hardware for computation or data processing.

Software Dependencies: No
LLM Response: The paper is theoretical and does not describe practical implementations or experimental setups that would require specific software dependencies with version numbers.

Experiment Setup: No
LLM Response: The paper is theoretical, focused on algorithm design and mathematical analysis. It does not include an experimental section with specific details about hyperparameters, training configurations, or system-level settings.
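The Pseudocode entry notes that the paper adapts Successive Elimination to the contaminated setting. As a rough illustration of that algorithmic pattern (not the paper's exact procedure), the sketch below runs generic successive elimination for best arm identification, with a trimmed mean standing in for the paper's robust moment estimators; the function names, the Hoeffding-style confidence radius, and the trimming rule are all assumptions made for this sketch.

```python
import math
import random

def trimmed_mean(samples, eps):
    """Illustrative robust estimator: drop the eps-fraction of extreme
    samples on each side, then average (not the paper's estimator)."""
    s = sorted(samples)
    k = int(eps * len(s))
    core = s[k:len(s) - k] if len(s) > 2 * k else s
    return sum(core) / len(core)

def successive_elimination(arms, delta=0.05, eps=0.1, max_rounds=2000):
    """Generic successive-elimination sketch for best arm identification.

    `arms` is a list of zero-argument callables, each returning one
    (possibly contaminated) sample from that arm. Active arms are sampled
    uniformly; after each round, any arm whose upper confidence bound
    falls below the best lower confidence bound is eliminated.
    """
    active = list(range(len(arms)))
    samples = {i: [] for i in active}
    for t in range(1, max_rounds + 1):
        for i in active:
            samples[i].append(arms[i]())
        # Hoeffding-style confidence radius with a crude union bound
        # (illustrative; the paper derives tighter bounds tailored to
        # robust estimates in the contaminated setting).
        rad = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        means = {i: trimmed_mean(samples[i], eps) for i in active}
        best_lcb = max(means[i] - rad for i in active)
        active = [i for i in active if means[i] + rad >= best_lcb]
        if len(active) == 1:
            break
    return active[0]  # index of a surviving arm (the estimated best)
```

For example, with two Gaussian arms of means 0 and 1, the surviving arm is the higher-mean one with probability at least 1 - delta. Note that under genuine contamination the trimmed mean is biased, which mirrors the partial-identifiability issue the abstract highlights: no amount of sampling fully pins down the true means.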