Stochastic Multi-armed Bandits: Optimal Trade-off among Optimality, Consistency, and Tail Risk

Authors: David Simchi-Levi, Zeyu Zheng, Feng Zhu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Last, a brief account of experiments is conducted to illustrate our theoretical findings. We discuss how tuning parameters affect the performance of our policy, and reiterate the insight that relaxing worst-case optimality and instance-dependent consistency (or allowing sub-optimality and inconsistency) may leave room for the regret distribution to be more light-tailed. We study the empirical performance via synthetic simulations and present insights from our results.
Researcher Affiliation | Academia | David Simchi-Levi, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139, dslevi@mit.edu; Zeyu Zheng, Industrial Engineering & Operations Research, University of California, Berkeley, Berkeley, CA 94720, zyzheng@berkeley.edu; Feng Zhu, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139, fengzhu@mit.edu
Pseudocode | Yes | Algorithm 1: Successive Elimination with Random Permutation (SE w/ RP)
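For intuition, the successive-elimination scheme named above can be sketched as follows. This is a minimal illustrative sketch only, not the paper's Algorithm 1: the function name, the confidence-radius formula, and the `delta` parameter are our assumptions, while the random permutation of surviving arms each round reflects the algorithm's title.

```python
import math
import random

def successive_elimination_rp(arms, horizon, delta=0.05, rng=None):
    """Illustrative sketch: successive elimination with a random
    permutation of the surviving arms each round.

    `arms` is a list of callables returning a stochastic reward in [0, 1].
    The confidence radius below is a standard Hoeffding-style choice,
    an assumption rather than the paper's exact specification.
    """
    rng = rng or random.Random(0)
    k = len(arms)
    active = list(range(k))
    counts = [0] * k
    means = [0.0] * k
    t = 0
    while t < horizon and len(active) > 1:
        rng.shuffle(active)  # random permutation of surviving arms
        for i in list(active):
            if t >= horizon:
                break
            reward = arms[i]()
            counts[i] += 1
            means[i] += (reward - means[i]) / counts[i]  # running mean
            t += 1
        # Hoeffding-style confidence radius for each surviving arm
        rad = {
            i: math.sqrt(math.log(2 * k * horizon / delta) / (2 * counts[i]))
            for i in active
        }
        # eliminate arms whose upper bound falls below the best lower bound
        best_lcb = max(means[i] - rad[i] for i in active)
        active = [i for i in active if means[i] + rad[i] >= best_lcb]
    return max(active, key=lambda i: means[i])
```

A quick usage example: with three arms of distinct mean rewards, the sketch converges on the highest-mean arm once the confidence radii shrink enough to separate the arms.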
Open Source Code | No | The paper states
Open Datasets | No | The paper mentions
Dataset Splits | No | The paper focuses on theoretical analysis and synthetic simulations. It does not provide specific details on dataset splits (e.g., percentages, counts, or references to standard splits) for training, validation, or testing.
Hardware Specification | No | The paper discusses
Software Dependencies | No | The paper details algorithms and theoretical proofs for multi-armed bandit problems. While it mentions
Experiment Setup | No | The paper mentions that