Bandit Phase Retrieval

Authors: Tor Lattimore, Botao Hao

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study a bandit version of phase retrieval where the learner chooses actions (A_t)_{t=1}^n in the d-dimensional unit ball and the expected reward is ⟨A_t, θ⟩² with θ ∈ ℝ^d an unknown parameter vector. We prove an upper bound on the minimax cumulative regret in this problem of Õ(d√n), which matches known lower bounds up to logarithmic factors and improves on the best known upper bound by a factor of d. We also show that the minimax simple regret is Θ̃(d/√n) and that this is only achievable by an adaptive algorithm. Our analysis shows that an apparently convincing heuristic for guessing lower bounds can be misleading and that uniform bounds on the information ratio for information-directed sampling [Russo and Van Roy, 2014] are not sufficient for optimal regret.
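The reward model from the abstract (mean reward ⟨A_t, θ⟩² for actions in the unit ball) can be sketched as a small simulation. This is an illustrative environment only, not the authors' algorithm; the Gaussian noise, its standard deviation, and the uniform-on-the-sphere baseline policy are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 1000

# Unknown parameter vector; normalised so the optimal mean reward is
# <theta, theta>^2 = 1 (attained by playing theta itself).
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)

def reward(action, noise_std=0.1):
    """Noisy observation with mean <action, theta>^2 (the phase-retrieval signal)."""
    return np.dot(action, theta) ** 2 + noise_std * rng.standard_normal()

def random_action():
    """Uniformly random direction on the d-dimensional unit sphere."""
    a = rng.standard_normal(d)
    return a / np.linalg.norm(a)

# Cumulative regret of the non-adaptive uniform baseline over n rounds:
# each round incurs 1 - <A_t, theta>^2 in expectation.
cum_regret = sum(1.0 - np.dot(random_action(), theta) ** 2 for _ in range(n))
```

A non-adaptive policy like this incurs regret linear in n; the paper's point is that an adaptive algorithm achieves cumulative regret Õ(d√n) and that adaptivity is necessary for the optimal simple regret.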
Researcher Affiliation | Industry | Tor Lattimore (DeepMind, London, lattimore@deepmind.com) and Botao Hao (DeepMind, London, haobotao000@gmail.com).
Pseudocode | Yes | Algorithm 1: the procedure operates in d iterations. The first iteration is implemented in Lines 1–5 and the remaining d − 1 iterations in Lines 7–15.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the methodology described.
Open Datasets | No | The paper is theoretical and does not involve empirical experiments with datasets. It defines a problem setting and proves theoretical bounds, without mentioning specific datasets or their availability.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets. Therefore, it does not specify any training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and focuses on mathematical proofs and algorithm design. It does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not describe an implementation or experiments that would require specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs rather than empirical experiments. Therefore, it does not provide details on experimental setup such as hyperparameters or training configurations.