Equipping Experts/Bandits with Long-term Memory
Authors: Kai Zheng, Haipeng Luo, Ilias Diakonikolas, Liwei Wang
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We propose the first reduction-based approach to obtaining long-term memory guarantees for online learning in the sense of Bousquet and Warmuth [8], by reducing the problem to achieving typical switching regret. Specifically, for the classical expert problem with K actions and T rounds, using our framework we develop various algorithms with a regret bound of order O(√(T(S ln T + n ln K))) compared to any sequence of experts with S ≥ 1 switches among n ≤ min{S, K} distinct experts. In addition, by plugging specific adaptive algorithms into our framework we also achieve the best of both stochastic and adversarial environments simultaneously. This resolves an open problem of Warmuth and Koolen [35]. Furthermore, we extend our results to the sparse multi-armed bandit setting and show both negative and positive results for long-term memory guarantees. As a side result, our lower bound also implies that sparse losses do not help improve the worst-case regret for contextual bandits, a sharp contrast with the non-contextual case. (This expert-setting bound is restated in explicit notation after the table.) |
| Researcher Affiliation | Academia | 1 Key Laboratory of Machine Perception, MOE, School of EECS, Peking University 2 Center for Data Science, Peking University 3 University of Southern California 4 University of Wisconsin-Madison |
| Pseudocode | Yes | Algorithm 1: A Simple Reduction for Long-term Memory; Algorithm 2: A Parameter-free Reduction for Best-of-both-worlds; Algorithm 3: A Sparse MAB Algorithm with Long-term Memory (a sketch of a standard switching-regret base algorithm that such reductions build on appears after the table). |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide links to a code repository for the described methodology. |
| Open Datasets | No | The paper is theoretical, presenting algorithms and regret bounds, and does not involve empirical evaluation on datasets. Therefore, it does not mention or provide access to any training datasets. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical evaluation on datasets, thus no information on training, validation, or test splits is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe empirical experiments requiring specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not provide details about software dependencies or specific version numbers required for implementation or reproduction. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs, rather than empirical experiment setups. Therefore, it does not include specific details such as hyperparameters or training configurations. |
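
For reference, here is a restatement of the main expert-setting bound quoted in the Research Type row in explicit notation. The regret definition and the bounded-loss assumption below are standard for the expert problem and are my own phrasing, not copied from the paper; T, K, S, and n follow the abstract.

```latex
% T rounds, K experts, losses \ell_t \in [0,1]^K (assumed bounded), algorithm plays p_t \in \Delta_K.
% Comparator: any sequence (i_1^\star, \dots, i_T^\star) with S \ge 1 switches
% among n \le \min\{S, K\} distinct experts.
\[
  \mathrm{Regret}_T
    \;=\; \sum_{t=1}^{T} \langle p_t, \ell_t \rangle
      \;-\; \sum_{t=1}^{T} \ell_t(i_t^\star)
    \;=\; O\!\left(\sqrt{T\left(S \ln T + n \ln K\right)}\right).
\]
```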
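The Pseudocode row lists three reductions which, per the abstract, obtain long-term memory guarantees by reducing to ordinary switching regret. As an illustration of the kind of base algorithm such a reduction could plug in (this is not the paper's Algorithm 1, 2, or 3), here is a minimal sketch of the classical Fixed-Share forecaster of Herbster and Warmuth; `eta` and `alpha` are illustrative placeholder parameters.

```python
import numpy as np

def fixed_share(losses, eta=0.5, alpha=0.01):
    """Classical Fixed-Share forecaster: exponentially weighted averaging
    with a uniform 'share' (mixing) step, the textbook way to obtain
    switching (tracking) regret in the expert problem.

    losses: (T, K) array of per-round expert losses in [0, 1].
    Returns the cumulative loss of the forecaster's mixture predictions.
    """
    T, K = losses.shape
    w = np.full(K, 1.0 / K)                   # uniform initial weights
    cumulative_loss = 0.0
    for t in range(T):
        p = w / w.sum()                       # play the normalized weights
        cumulative_loss += float(p @ losses[t])
        w = w * np.exp(-eta * losses[t])      # exponential weight update
        w = (1 - alpha) * w + alpha * w.sum() / K   # share mass uniformly
    return cumulative_loss
```

Roughly speaking, the paper's contribution is that wrapping a switching-regret algorithm of this kind in its reduction yields the stronger long-term memory bound above, where the n ln K term takes the place of a dependence on S ln K when the comparator sequence reuses a small pool of n experts.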