Dying Experts: Efficient Algorithms with Optimal Regret Bounds
Authors: Hamid Shayestehmanesh, Sajjad Azami, Nishant A. Mehta
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In both cases, we provide matching upper and lower bounds on the ranking regret in the fully adversarial setting. Furthermore, we present new, computationally efficient algorithms that obtain our optimal upper bounds. |
| Researcher Affiliation | Academia | Hamid Shayestehmanesh, Sajjad Azami, Nishant A. Mehta — Department of Computer Science, University of Victoria; {hamidshayestehmanesh, sajjadazami, nmehta}@uvic.ca |
| Pseudocode | Yes | Algorithm 1: Hedge-Perm-Unknown (HPU) and Algorithm 2: Hedge-Perm-Known (HPK) |
| Open Source Code | No | The paper contains no statement about releasing source code, no link to a code repository, and no mention of code in supplementary materials. |
| Open Datasets | No | This is a theoretical paper focused on regret bounds and algorithms; it does not use or refer to any publicly available datasets for training or evaluation. |
| Dataset Splits | No | As a theoretical paper, it involves no empirical validation and hence no training/validation/test splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any hardware specifications used for running experiments. |
| Software Dependencies | No | The paper is theoretical and does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not include details about an experimental setup, such as hyperparameters or system-level training settings. |