Adversarial Online Learning with noise
Authors: Alon Resler, Yishay Mansour
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper's main contribution is deriving tight regret bounds for learning with noise in the adversarial online learning model, covering both upper bounds (algorithms) and lower bounds (impossibility results). |
| Researcher Affiliation | Collaboration | 1Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel 2Google Research, Israel. |
| Pseudocode | Yes | Algorithm 1 Exponential Weights Scheme |
| Open Source Code | No | The paper does not include any explicit statement about providing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper is theoretical and derives regret bounds. It does not conduct empirical experiments using a specific dataset; therefore, there is no mention of a publicly available training dataset. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and focuses on mathematical derivations and algorithm design. It does not report on empirical experiments that would require specifying hardware used. |
| Software Dependencies | No | The paper is theoretical and does not describe empirical experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not detail an experimental setup, concrete hyperparameter values, or training configurations as it does not conduct empirical experiments. |
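The table notes that the paper's only pseudocode is "Algorithm 1 Exponential Weights Scheme". As a rough illustration only (a sketch of the standard exponential-weights / Hedge update, not the authors' exact algorithm, which additionally handles noisy feedback), the scheme can be written as:

```python
import math

def exponential_weights(losses, eta):
    """Run the classic exponential-weights (Hedge) scheme on a
    T x K sequence of observed losses in [0, 1].

    Returns the probability distribution played at each round.
    (Illustrative sketch; the paper's Algorithm 1 operates on
    noisy loss observations, which this omits.)
    """
    K = len(losses[0])
    weights = [1.0] * K
    plays = []
    for round_losses in losses:
        total = sum(weights)
        plays.append([w / total for w in weights])
        # Multiplicative update: down-weight actions with high loss.
        weights = [w * math.exp(-eta * loss)
                   for w, loss in zip(weights, round_losses)]
    return plays

# Toy run: action 0 always suffers loss 1, action 1 always loss 0.
T, K = 50, 2
losses = [[1.0, 0.0]] * T
eta = math.sqrt(2 * math.log(K) / T)  # standard learning-rate tuning
dists = exponential_weights(losses, eta)
```

With this tuning the played distribution concentrates on the better action, which is the behavior the paper's regret bounds quantify.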