Parallelizing Thompson Sampling

Authors: Amin Karbasi, Vahab Mirrokni, Mohammad Shadravan

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also demonstrate experimentally that dynamic batch allocation dramatically outperforms natural baselines such as static batch allocations."
Researcher Affiliation | Collaboration | Amin Karbasi (Yale University, amin.karbasi@yale.edu); Vahab Mirrokni (Google Research, mirrokni@google.com); Mohammad Shadravan (Yale University, mohammad.shadravan@yale.edu)
Pseudocode | Yes | Algorithm 1: Batch Thompson Sampling (a hedged sketch appears below)
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability for the described methodology. The ethics review section states N/A for code availability.
Open Datasets | No | The paper mentions using the 'Movie Lens data set' for its experiments but does not provide a specific link, DOI, repository name, or a formal citation with author names and year for public access to this dataset.
Dataset Splits | No | The paper does not provide dataset split information (e.g., percentages or sample counts for train/validation/test sets, or references to predefined splits) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. The ethics review section also indicates N/A for compute resources.
Software Dependencies | No | The paper does not provide ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | "For MOTS, we set ρ = 0.9999 and α = 2 as suggested by Jin et al. [2020]. For the parameters, we set δ = 0.61, σ = 0.01, and ϵ = 0.71 as suggested by Beygelzimer et al. [2011]." (see the configuration sketch below)
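The paper's Algorithm 1 (Batch Thompson Sampling) is available only as pseudocode, per the audit above. As a rough illustration of the idea, here is a minimal Python sketch of a batched Thompson sampling loop for Bernoulli bandits, assuming Beta(1, 1) priors and a doubling rule for dynamic batch boundaries; the function and variable names (`batch_thompson_sampling`, `pulls_at_update`, etc.) are illustrative and not taken from the paper.

```python
import numpy as np

def batch_thompson_sampling(true_means, horizon, rng=None):
    """Minimal sketch of batched Thompson sampling for Bernoulli bandits.

    Posteriors are Beta(a_i, b_i) and are refreshed only at batch
    boundaries; a new batch starts whenever some arm's pull count has
    doubled since the last posterior update (a dynamic doubling rule).
    """
    rng = rng or np.random.default_rng(0)
    k = len(true_means)
    a = np.ones(k)                 # Beta posterior successes (+1 prior)
    b = np.ones(k)                 # Beta posterior failures (+1 prior)
    pulls = np.zeros(k)            # total pulls per arm
    pulls_at_update = np.ones(k)   # pull counts at the last posterior refresh
    pending = []                   # (arm, reward) pairs observed this batch
    total_reward = 0.0

    for _ in range(horizon):
        # Sample from the batch-frozen posterior and pull the best arm.
        theta = rng.beta(a, b)
        arm = int(np.argmax(theta))
        reward = float(rng.random() < true_means[arm])
        pulls[arm] += 1
        pending.append((arm, reward))
        total_reward += reward

        # Dynamic batch boundary: this arm's pull count has doubled,
        # so fold all pending observations into the posterior.
        if pulls[arm] >= 2 * pulls_at_update[arm]:
            for i, r in pending:
                a[i] += r
                b[i] += 1 - r
            pending.clear()
            pulls_at_update = np.maximum(pulls, 1).copy()

    return total_reward

# Example: 5 Bernoulli arms, 10,000 rounds
print(batch_thompson_sampling([0.1, 0.2, 0.3, 0.4, 0.5], 10_000))
```

Under a doubling rule of this kind, each arm triggers only logarithmically many posterior refreshes over the horizon, which mirrors the limited-adaptivity regime the paper's dynamic batch allocation studies; the pulls inside a batch are the part that can run in parallel.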
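The Experiment Setup row pins down the only numeric hyperparameters the audit surfaced. A hypothetical transcription as a Python config follows; the key names and the grouping of the second parameter set are assumptions for illustration, not taken from any released code.

```python
# Hyperparameters quoted in the Experiment Setup row.
# Key names and grouping are illustrative assumptions, not from released code.
EXPERIMENT_CONFIG = {
    "MOTS": {            # values suggested by Jin et al. [2020]
        "rho": 0.9999,
        "alpha": 2,
    },
    "other_params": {    # values suggested by Beygelzimer et al. [2011]
        "delta": 0.61,
        "sigma": 0.01,
        "epsilon": 0.71,
    },
}
```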