Efficient online algorithms for fast-rate regret bounds under sparsity
Authors: Pierre Gaillard, Olivier Wintenberger
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We consider the problem of online convex optimization in two different settings: arbitrary and i.i.d. sequences of convex loss functions. In both settings, we provide efficient algorithms whose cumulative excess risks are controlled with fast-rate sparse bounds. |
| Researcher Affiliation | Academia | Pierre Gaillard (INRIA, ENS, PSL Research University, Paris, France; pierre.gaillard@inria.fr); Olivier Wintenberger (Sorbonne Université, CNRS, LPSM, Paris, France; olivier.wintenberger@upmc.fr) |
| Pseudocode | Yes | Algorithm 1: Squint/BOA with multiple constant learning rates assigned to each parameter... Algorithm 2: SABOA (Sparse Acceleration of BOA). A generic sketch of this style of update appears after the table. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code or provide any links to code repositories. |
| Open Datasets | No | The paper is purely theoretical and does not use or reference any publicly available datasets for training or evaluation. |
| Dataset Splits | No | The paper is theoretical and does not describe any dataset splits for validation or other purposes. |
| Hardware Specification | No | The paper is theoretical and does not mention any hardware specifications used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not provide specific details about an experimental setup, such as hyperparameters or training configurations. |
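The pseudocode row above refers to two aggregation procedures from the paper. As a rough illustration of the Squint/BOA family of updates they build on, here is a minimal Python sketch of exponential weighting with a grid of constant learning rates per expert under the square loss. The function name `squint_boa`, the specific learning-rate grid, and the square-loss choice are assumptions for illustration, not the authors' exact Algorithm 1 or SABOA.

```python
import numpy as np

def squint_boa(expert_preds, outcomes, prior=None):
    """Aggregate K experts over T rounds under the square loss.

    A hypothetical Squint/BOA-style sketch: each expert is paired with a
    grid of constant learning rates, and weights follow
    prior * exp(eta * R - eta^2 * V), summed over the grid.

    expert_preds: (T, K) array, expert_preds[t, k] = prediction of expert k.
    outcomes:     (T,) array of observed values y_t.
    Returns the (T,) aggregated predictions.
    """
    T, K = expert_preds.shape
    etas = 2.0 ** -np.arange(1, 9)       # assumed grid of constant learning rates
    prior = np.full(K, 1.0 / K) if prior is None else prior
    R = np.zeros((K, len(etas)))         # cumulative instantaneous regret
    V = np.zeros((K, len(etas)))         # cumulative squared instantaneous regret
    preds = np.empty(T)
    for t in range(T):
        log_w = etas * R - etas**2 * V   # second-order (Bernstein-style) correction
        log_w -= log_w.max()             # numerical stability before exponentiating
        w = prior[:, None] * np.exp(log_w)
        p = w.sum(axis=1)                # marginalize over the learning-rate grid
        p /= p.sum()
        preds[t] = p @ expert_preds[t]
        # Instantaneous regret of each expert vs. the aggregated prediction.
        loss_agg = (preds[t] - outcomes[t]) ** 2
        loss_exp = (expert_preds[t] - outcomes[t]) ** 2
        r = loss_agg - loss_exp          # (K,)
        R += r[:, None]
        V += (r**2)[:, None]
    return preds

# Usage example: three constant experts tracking noisy observations.
rng = np.random.default_rng(0)
y = 0.3 + 0.05 * rng.standard_normal(200)
experts = np.tile(np.array([0.0, 0.3, 1.0]), (200, 1))
agg = squint_boa(experts, y)
```

Pairing every expert with a whole grid of learning rates is one standard way to get adaptivity without a doubling trick: the aggregation automatically concentrates on the rate best suited to the observed regret variance. The paper's actual algorithms refine this idea to obtain fast-rate sparse bounds; the sketch above only conveys the flavor of the weight update.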