Anonymous Bandits for Multi-User Systems
Authors: Hossein Esfandiari, Vahab Mirrokni, Jon Schneider
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we perform simulations of our anonymous bandits algorithms, the explore-then-commit algorithm (Algorithm 2) and several variants of Algorithm 1 with different decomposition algorithms, on synthetic data. We observe that both the randomized decomposition and LP decomposition based variants of Algorithm 1 significantly outperform the explore-then-commit algorithm and the greedy decomposition variant, as predicted by our theoretical bounds. We discuss these in more detail in Section B of the Supplemental Material. |
| Researcher Affiliation | Industry | Hossein Esfandiari Google Research esfandiari@google.com Vahab Mirrokni Google Research mirrokni@google.com Jon Schneider Google Research jschnei@google.com |
| Pseudocode | Yes | Algorithm 1: Low-regret algorithm for anonymous bandits. Algorithm 2: Explore-then-commit algorithm for anonymous bandits without a user-clustering assumption. |
| Open Source Code | Yes | Code for the simulations is included in the Supplemental Material |
| Open Datasets | No | Finally, we perform simulations of our anonymous bandits algorithms, the explore-then-commit algorithm (Algorithm 2) and several variants of Algorithm 1 with different decomposition algorithms, on synthetic data. The paper does not mention using any publicly available datasets with concrete access information. |
| Dataset Splits | No | The paper performs simulations on synthetic data but does not specify any training/validation/test splits, nor does it refer to predefined splits from external datasets. |
| Hardware Specification | No | The simulations performed in this work were not computationally intensive, so this is not of particular interest (combined, they took about an hour on a laptop). While a 'laptop' is mentioned, no specific model or detailed specifications are provided. |
| Software Dependencies | No | The paper does not provide specific version numbers for any ancillary software dependencies. |
| Experiment Setup | No | The paper discusses the algorithms and their theoretical bounds, and mentions simulations on synthetic data, but it does not provide specific details about the experimental setup such as hyperparameter values, learning rates, batch sizes, or training schedules. |
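
The table above notes that the paper's experiments compare an explore-then-commit baseline (Algorithm 2) against decomposition-based variants of Algorithm 1 on synthetic data, without reporting concrete hyperparameters. For readers unfamiliar with the baseline, below is a minimal sketch of generic explore-then-commit on a standard K-armed Bernoulli bandit; it is not the paper's anonymous-bandits algorithm, and the arm count `K`, horizon `T`, exploration budget `explore_rounds`, and the `pull` helper are hypothetical choices made purely for illustration.

```python
# Minimal explore-then-commit sketch on a synthetic K-armed Bernoulli bandit.
# Illustrative only: this is a generic baseline, not the paper's Algorithm 2,
# and every constant below is a hypothetical choice.
import numpy as np

rng = np.random.default_rng(0)

K = 5                     # number of arms (hypothetical)
T = 10_000                # time horizon (hypothetical)
explore_rounds = 200      # pulls per arm in the exploration phase (hypothetical)
true_means = rng.uniform(0.1, 0.9, size=K)  # synthetic Bernoulli arm means


def pull(arm: int) -> float:
    """Draw a Bernoulli reward from the chosen arm."""
    return float(rng.random() < true_means[arm])


# Phase 1: explore each arm a fixed number of times.
totals = np.zeros(K)
counts = np.zeros(K)
t = 0
for arm in range(K):
    for _ in range(explore_rounds):
        totals[arm] += pull(arm)
        counts[arm] += 1
        t += 1

# Phase 2: commit to the empirically best arm for the remaining rounds.
best_arm = int(np.argmax(totals / counts))
committed_reward = sum(pull(best_arm) for _ in range(T - t))

total_reward = totals.sum() + committed_reward
regret = T * true_means.max() - total_reward
print(f"best arm: {best_arm}, empirical regret: {regret:.1f}")
```

The two-phase structure (uniform exploration, then committing to the empirically best arm) is the defining feature of this baseline; in the paper's simulations, the randomized and LP decomposition variants of Algorithm 1 are reported to significantly outperform it.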