Delay and Cooperation in Nonstochastic Linear Bandits

Authors: Shinji Ito, Daisuke Hatano, Hanna Sumita, Kei Takemura, Takuro Fukunaga, Naonori Kakimura, Ken-ichi Kawarabayashi

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper offers a nearly optimal algorithm for online linear optimization with delayed bandit feedback. ... This algorithm achieves nearly optimal performance, as we are able to show that arbitrary algorithms suffer the regret of Ω(√(m(m+d)T)) in the worst case. To develop the algorithm, we introduce a technique we refer to as distribution truncation, which plays an essential role in bounding the regret. [The matching bounds are written out below the table.]
Researcher Affiliation | Collaboration | Shinji Ito (NEC Corporation, i-shinji@nec.com); Daisuke Hatano (RIKEN AIP, daisuke.hatano@riken.jp); Hanna Sumita (Tokyo Institute of Technology, sumita@c.titech.ac.jp); Kei Takemura (NEC Corporation, kei_takemura@nec.com); Takuro Fukunaga (Chuo University / RIKEN AIP / JST PRESTO, fukunaga.07s@g.chuo-u.ac.jp); Naonori Kakimura (Keio University, kakimura@math.keio.ac.jp); Ken-ichi Kawarabayashi (National Institute of Informatics, k-keniti@nii.ac.jp)
Pseudocode | Yes | Algorithm 1: "An algorithm for online linear optimization with delayed bandit feedback" [an illustrative sketch of the delayed-feedback protocol follows the table]
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | This paper is a theoretical work and does not use datasets for training. Therefore, it does not provide information about publicly available datasets.
Dataset Splits | No | This paper is a theoretical work and does not use datasets for validation. Therefore, it does not provide information about dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any experimental setup or the specific hardware used.
Software Dependencies | No | The paper is theoretical and does not describe any experimental setup or software dependencies with specific version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup, hyperparameters, or system-level training settings.
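
To make the regret claim in the Research Type row concrete, the bounds can be written out as below. This is a sketch based only on the quoted abstract: it assumes the paper's standard notation (m is the dimensionality of the action set, d the delay, and T the number of rounds) and that the algorithm's upper bound matches the stated lower bound up to polylogarithmic factors, as "nearly optimal" suggests.

```latex
\begin{align*}
  % Upper bound of the proposed algorithm (assumed to match the lower
  % bound up to polylogarithmic factors, per "nearly optimal"):
  R_T &= \tilde{O}\!\left(\sqrt{m(m+d)\,T}\right), \\
  % Worst-case lower bound for arbitrary algorithms (quoted in the table):
  R_T &= \Omega\!\left(\sqrt{m(m+d)\,T}\right).
\end{align*}
```

As a sanity check, setting d = 0 (no delay) gives √(m²T), consistent with the familiar Õ(m√T) rate for nonstochastic linear bandits without delay.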
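
Algorithm 1 itself is not reproduced in this report. As a purely illustrative, hypothetical sketch of the setting it operates in, the Python loop below shows how delayed bandit feedback works: in round t the learner observes only the scalar loss of the action it chose in round t − d. The `choose_action` and `update` callbacks are placeholders, not the paper's distribution-truncation method.

```python
import numpy as np

def delayed_bandit_loop(T, m, d, choose_action, update, rng):
    """Interaction protocol for linear bandits with delayed feedback.

    choose_action(t) -> m-dimensional action a_t (placeholder policy)
    update(s, a_s, loss_s) -> learner update once round s's feedback arrives
    """
    pending = {}  # round index -> (action, scalar loss), not yet revealed
    for t in range(T):
        loss_vec = rng.standard_normal(m)  # stand-in; a real adversary picks this
        a_t = choose_action(t)
        # Bandit feedback: only the scalar loss <loss_vec, a_t> is ever revealed.
        pending[t] = (a_t, float(loss_vec @ a_t))
        s = t - d  # feedback from round s becomes available in round t
        if s in pending:
            a_s, loss_s = pending.pop(s)
            update(s, a_s, loss_s)
    return pending  # the last d rounds' feedback never arrives within the horizon

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    delayed_bandit_loop(
        T=100, m=5, d=3,
        choose_action=lambda t: rng.standard_normal(5),  # placeholder policy
        update=lambda s, a, loss: None,                  # placeholder update rule
        rng=rng,
    )
```

With d = 0 the pop happens in the same round it was stored, recovering the standard (undelayed) bandit protocol.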