Improved Pure Exploration in Linear Bandits with No-Regret Learning
Authors: Mohammadi Zaki, Avi Mohan, Aditya Gopalan
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also provide numerical results for the proposed algorithm consistent with its theoretical guarantees. We numerically evaluate PELEG against the algorithms XY-static [Soare et al., 2014], LUCB [Kalyanakrishnan et al., 2012], ALBA [Tao et al., 2018], LinGapE [Xu et al., 2017], RAGE [Fiez et al., 2019], Lazy-TS [Jedra and Proutiere, 2020] and LinGame [Degenne et al., 2020], for 3 common benchmark settings. |
| Researcher Affiliation | Academia | 1. Indian Institute of Science, Bangalore; 2. Boston University, MA |
| Pseudocode | Yes | Algorithm 1 Phased Elimination Linear Exploration (PELEG) |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper describes generating synthetic datasets for its experiments (e.g., 'The arm set is the standard basis', 'The arms set comprises of 100 vectors sampled uniformly'). It does not refer to or provide access information for any publicly available or open datasets. |
| Dataset Splits | No | The paper generates synthetic data to evaluate its algorithm. It does not mention training, validation, or test splits, either for a pre-existing dataset or as splits defined for reproducibility. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processors, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for reproducibility (e.g., Python 3.x, PyTorch 1.x, or specific solver versions). |
| Experiment Setup | No | The paper describes the algorithm's design and theoretical properties but does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes, epochs) or system-level configurations needed to replicate the experiments. |