Thompson Sampling for Robust Transfer in Multi-Task Bandits
Authors: Zhi Wang, Chicheng Zhang, Kamalika Chaudhuri
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we evaluate the algorithm on synthetic data and show that the TS-type algorithm enjoys superior empirical performance in comparison with the UCB-based algorithm and a baseline algorithm that performs TS for each individual task without transfer. |
| Researcher Affiliation | Collaboration | University of California San Diego; University of Arizona; Facebook AI Research. |
| Pseudocode | Yes | Algorithm 1: ROBUSTAGG-TS(ϵ) |
| Open Source Code | Yes | Our code is available at https://github.com/zhiwang123/eps-MPMAB-TS. |
| Open Datasets | No | The paper states that 'The algorithms were evaluated on randomly generated 0.15-MPMAB problem instances with different numbers of subpar arms,' indicating synthetic data generation without providing access information for a pre-existing public dataset. |
| Dataset Splits | No | The paper describes running algorithms on synthetic data for a given number of rounds (T = 50,000) but does not specify any training, validation, or test dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory, or types of computing resources used for the experiments. |
| Software Dependencies | No | The paper does not list specific versions for software dependencies, such as programming languages, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | Experimental Setup. We compared the performance of 4 algorithms: (1) ROBUSTAGG-TS(ϵ) with constants c1 = 1/2 and c2 = 1; (2) ROBUSTAGG(ϵ) (Wang et al., 2021, Section 6.1); (3) IND-TS, the baseline algorithm that runs TS with Gaussian priors for each player individually; and (4) IND-UCB, the baseline algorithm that runs UCB-1 for each player individually. The algorithms were evaluated on randomly generated 0.15-MPMAB problem instances with different numbers of subpar arms. To stay consistent with the work of Wang et al. (2021), we followed the same instance generation procedure and considered I_{5ϵ} to be the set of subpar arms. We set the number of players M = 20 and the number of arms K = 10; then, for each integer value v ∈ [0, 9], we generated 30 0.15-MPMAB problem instances with Bernoulli reward distributions and \|I_{5ϵ}\| = v. We ran the algorithms on each instance for a horizon of T = 50,000 rounds. (An illustrative sketch of the IND-TS baseline follows the table.) |
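
As a reading aid, the snippet below sketches the IND-TS baseline named in the Experiment Setup row: each of M = 20 players independently runs Thompson sampling with Gaussian priors over K = 10 Bernoulli arms. This is a minimal sketch, not the authors' released code (see the repository linked above); the reward means, the prior and noise scales, and the shortened horizon are illustrative assumptions, and the subpar-arm instance construction of Wang et al. (2021) is not reproduced.

```python
# Minimal sketch of the IND-TS baseline: independent Thompson sampling with
# Gaussian priors for each player, with no transfer across players.
# Reward means, seeds, prior scales, and the shortened horizon are
# illustrative assumptions; the subpar-arm construction of Wang et al.
# (2021) is not reproduced here.
import numpy as np

M, K = 20, 10            # players and arms, as reported in the paper
T = 5_000                # shortened for illustration; the paper uses T = 50,000
rng = np.random.default_rng(0)

# Hypothetical instance: Bernoulli reward mean mu[p, k] for player p, arm k.
mu = rng.uniform(0.0, 1.0, size=(M, K))

# Per-(player, arm) Gaussian posterior: prior N(0, 1), unit observation noise.
post_mean = np.zeros((M, K))
pulls = np.zeros((M, K))

regret = 0.0
for t in range(T):
    for p in range(M):
        # Sample a mean estimate for every arm and play the argmax.
        theta = rng.normal(post_mean[p], 1.0 / np.sqrt(pulls[p] + 1.0))
        a = int(np.argmax(theta))
        r = float(rng.random() < mu[p, a])          # Bernoulli reward draw
        # Conjugate Gaussian posterior update (mean shrunk toward the prior 0).
        pulls[p, a] += 1.0
        post_mean[p, a] += (r - post_mean[p, a]) / (pulls[p, a] + 1.0)
        regret += mu[p].max() - mu[p, a]            # per-round pseudo-regret

print(f"IND-TS cumulative pseudo-regret over {T} rounds, {M} players: {regret:.1f}")
```

A full reproduction would additionally implement ROBUSTAGG-TS(ϵ), ROBUSTAGG(ϵ), and IND-UCB, and average results over the 30 generated instances per subpar-arm count, as described in the Experiment Setup row.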