Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Stochastic Multi-armed Bandits: Optimal Trade-off among Optimality, Consistency, and Tail Risk
Authors: David Simchi-Levi, Zeyu Zheng, Feng Zhu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Last, a brief account of experiments is given to illustrate our theoretical findings. We discuss how tuning parameters affect the performance of our policy, and reiterate the insight that relaxing worst-case optimality and instance-dependent consistency (or allowing sub-optimality and inconsistency) may leave space for the regret distribution to be more light-tailed. We study the empirical performance via synthetic simulations, and present insights from our results. |
| Researcher Affiliation | Academia | David Simchi-Levi, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139 (EMAIL); Zeyu Zheng, Industrial Engineering & Operations Research, University of California, Berkeley, Berkeley, CA 94720 (EMAIL); Feng Zhu, Institute for Data, Systems, and Society, Massachusetts Institute of Technology, Cambridge, MA 02139 (EMAIL) |
| Pseudocode | Yes | Algorithm 1: Successive Elimination with Random Permutation (SE w/ RP) |
| Open Source Code | No | The paper states |
| Open Datasets | No | The paper mentions |
| Dataset Splits | No | The paper focuses on theoretical analysis and synthetic simulations. It does not provide specific details on dataset splits (e.g., percentages, counts, or references to standard splits) for training, validation, or testing. |
| Hardware Specification | No | The paper discusses |
| Software Dependencies | No | The paper details algorithms and theoretical proofs for multi-armed bandit problems. While it mentions |
| Experiment Setup | No | The paper mentions that |
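The table above notes that the paper presents pseudocode for "Successive Elimination with Random Permutation" but releases no code. As context only, here is a minimal sketch of the classic successive-elimination bandit template with a random permutation over the active arms each round; this is a generic illustration under assumed reward bounds in [0, 1], not a reconstruction of the paper's exact Algorithm 1 or its confidence radii.

```python
import math
import random

def successive_elimination(arms, horizon, delta=0.05):
    """Generic successive-elimination sketch for stochastic bandits.

    `arms` is a list of no-argument callables returning rewards in [0, 1].
    Each round, a random permutation fixes the order in which active arms
    are pulled (a stand-in for the paper's random-permutation step); arms
    whose upper confidence bound falls below the best lower confidence
    bound are eliminated. The confidence radius below is a standard
    Hoeffding-style choice, assumed for illustration.
    """
    k = len(arms)
    active = list(range(k))
    means = [0.0] * k   # running empirical mean of each arm
    pulls = [0] * k     # number of times each arm has been pulled
    t = 0
    while t < horizon and len(active) > 1:
        random.shuffle(active)          # random permutation of active arms
        for i in list(active):
            if t >= horizon:
                break
            r = arms[i]()
            pulls[i] += 1
            means[i] += (r - means[i]) / pulls[i]  # incremental mean update
            t += 1
        # Hoeffding-style radius; eliminate arms that are confidently worse.
        rad = {i: math.sqrt(math.log(2 * k * max(t, 2) / delta) / (2 * pulls[i]))
               for i in active}
        best_lcb = max(means[i] - rad[i] for i in active)
        active = [i for i in active if means[i] + rad[i] >= best_lcb]
    return max(active, key=lambda i: means[i])
```

With deterministic arms the surviving arm is the one with the highest mean, since the confidence radii shrink until the inferior arm is eliminated.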