Problem Dependent View on Structured Thresholding Bandit Problems
Authors: James Cheshire, Pierre Menard, Alexandra Carpentier
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Appendix ?? we conduct some preliminary experiments to explore how our theoretical results translate in practice. All proofs are found in the Appendix. |
| Researcher Affiliation | Academia | Otto von Guericke University Magdeburg. Correspondence to: James Cheshire <james.cheschire@ovgu.de>. |
| Pseudocode | Yes | Algorithm 1 PD-MTB; Algorithm 2 Grad-Explore; Algorithm 3 PD-CTB |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code or links to a code repository. |
| Open Datasets | No | The paper focuses on a theoretical bandit problem setting and describes general problem formulations (e.g., 'K-armed bandit problem', 'unknown distribution νk'). It mentions 'preliminary experiments' but provides no specific details, names, links, or citations for any publicly available datasets used. |
| Dataset Splits | No | The paper does not specify any training, validation, or test dataset splits. The research is primarily theoretical; the preliminary experiments it mentions are not described in terms of data splits. |
| Hardware Specification | No | The paper mentions 'preliminary experiments' but does not provide any specific details about the hardware used to conduct these experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper mentions 'preliminary experiments' but does not provide specific version numbers for any software dependencies (e.g., programming languages, libraries, frameworks) used in these experiments. |
| Experiment Setup | No | The paper describes the theoretical setup of the Thresholding Bandit Problem and the parameters of its proposed algorithms (e.g., T1 and T2 derived from T and K). However, it does not provide specific experimental setup details such as hyperparameters, optimization settings, or system-level training configurations that would be needed for an empirical implementation of the algorithms (see the illustrative sketch below). |
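Since the paper does not spell out an experimental setup, the following is a minimal, hypothetical sketch of the thresholding bandit setting referenced in the table: K arms with unknown means, a budget of T pulls, and a threshold τ against which each arm is classified. It uses a plain uniform (round-robin) allocation baseline, not the paper's PD-MTB, Grad-Explore, or PD-CTB algorithms; the Gaussian noise model, the function name `uniform_threshold_bandit`, and all parameter values are assumptions for illustration only.

```python
import numpy as np

def uniform_threshold_bandit(means, tau, T, rng=None):
    """Illustrative thresholding-bandit loop (hypothetical baseline):
    pull the K arms round-robin for T rounds, then classify each arm's
    empirical mean against the threshold tau. Not the paper's method."""
    rng = np.random.default_rng() if rng is None else rng
    K = len(means)
    pulls = np.zeros(K, dtype=int)
    sums = np.zeros(K)
    for t in range(T):
        k = t % K                            # uniform (round-robin) allocation
        reward = rng.normal(means[k], 1.0)   # assumed unit-variance Gaussian noise
        pulls[k] += 1
        sums[k] += reward
    empirical_means = sums / np.maximum(pulls, 1)
    return empirical_means >= tau            # predicted "above threshold" arms

# Example: K = 5 arms, horizon T = 1000, threshold tau = 0.5 (values assumed).
if __name__ == "__main__":
    means = [0.1, 0.4, 0.5, 0.6, 0.9]
    print(uniform_threshold_bandit(means, tau=0.5, T=1000))
```

The uniform allocation is only a placeholder; the paper's algorithms adapt the allocation to the gaps between the arm means and the threshold, which is what a faithful reproduction would need to implement.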