Locally-Adaptive Nonparametric Online Learning
Authors: Ilja Kuzborskij, Nicolò Cesa-Bianchi
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We fill this gap by introducing efficient online algorithms (based on a single versatile master algorithm) each adapting to one of the following regularities: (i) local Lipschitzness of the competitor function, (ii) local metric dimension of the instance sequence, (iii) local performance of the predictor across different regions of the instance space. Extending previous approaches, we design algorithms that dynamically grow hierarchical ε-nets on the instance space whose prunings correspond to different locality profiles for the problem at hand. Using a technique based on tree experts, we simultaneously and efficiently compete against all such prunings, and prove regret bounds each scaling with a quantity associated with a different type of local regularity. When competing against simple locality profiles, our technique delivers regret bounds that are significantly better than those proven using the previous approach. On the other hand, the time dependence of our bounds is not worse than that obtained by ignoring any local regularities. (An illustrative sketch of the ε-net construction appears after this table.) |
| Researcher Affiliation | Collaboration | Ilja Kuzborskij, DeepMind, iljak@google.com; Nicolò Cesa-Bianchi, Dept. of Computer Science & DSRC, University of Milan, Italy, nicolo.cesa-bianchi@unimi.it |
| Pseudocode | Yes | Algorithm 1 contains the pseudocode for the case of exp-concave loss functions. ... Subroutine 2 propagate. (A simplified aggregation sketch for the exp-concave case also follows the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper mentions a 'toy one-dimensional case' for empirical illustration but does not provide concrete access information (link, DOI, citation, etc.) for any publicly available or open dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce data partitioning. |
| Hardware Specification | No | The paper mentions empirical illustration in a 'toy one-dimensional case' but does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | The paper describes algorithmic inputs like 'radius-tuning function ρ(k, t)' and mentions 'parameter-free approach' for classification, but it does not provide specific experimental setup details such as concrete hyperparameter values, training configurations, or system-level settings for reproducing the described empirical illustration. |
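
The ε-net construction quoted in the Research Type row can be illustrated with a minimal sketch. The Python code below is not the paper's Algorithm 1: as labeled assumptions, a single fixed radius `eps` stands in for the paper's radius-tuning function ρ(k, t), there is no hierarchy of nets, and each ball predicts with a running label mean (a natural choice under squared loss). The class and parameter names are hypothetical.

```python
import numpy as np

class EpsNetPredictor:
    """Hypothetical sketch of an epsilon-net online predictor.

    Simplifications versus the paper: one fixed radius `eps` replaces the
    radius-tuning function rho(k, t), there is no hierarchical net, and each
    ball predicts with the running mean of the labels it has observed.
    """

    def __init__(self, eps):
        self.eps = eps
        self.centers = []  # ball centers of the growing epsilon-net
        self.sums = []     # per-ball running sum of labels
        self.counts = []   # per-ball number of observed labels

    def _nearest_ball(self, x):
        """Index of the ball containing x, or None if x is eps-far from all centers."""
        if not self.centers:
            return None
        dists = [np.linalg.norm(x - c) for c in self.centers]
        i = int(np.argmin(dists))
        return i if dists[i] <= self.eps else None

    def predict(self, x):
        i = self._nearest_ball(x)
        if i is None or self.counts[i] == 0:
            return 0.0  # default prediction before any local data is seen
        return self.sums[i] / self.counts[i]

    def update(self, x, y):
        i = self._nearest_ball(x)
        if i is None:
            # x falls outside every ball: grow the net with a new center at x
            self.centers.append(np.asarray(x, dtype=float))
            self.sums.append(0.0)
            self.counts.append(0)
            i = len(self.centers) - 1
        self.sums[i] += y
        self.counts[i] += 1
```

In the standard online protocol one calls `predict(x_t)` before `update(x_t, y_t)` on each round; the number of centers grown this way is bounded by the ε-packing number of the instance sequence, which is what allows regret bounds of this type to scale with the local metric dimension.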
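The paper's master algorithm competes against all prunings of the hierarchical net via tree experts; reproducing that machinery would require following its pseudocode. As a hedged stand-in, the sketch below instead runs one `EpsNetPredictor` (from the previous sketch) per candidate radius and combines them with the exponentially weighted average forecaster, the classical aggregation tool for exp-concave losses such as bounded squared loss. `RadiusAggregator` and its parameters are hypothetical.

```python
import numpy as np

class RadiusAggregator:
    """Hypothetical stand-in for the paper's tree-expert aggregation:
    exponentially weighted averaging over fixed-radius experts."""

    def __init__(self, radii, eta=0.5):
        # Squared loss with predictions and labels in [0, 1] is
        # eta-exp-concave for eta <= 1/2, so eta = 0.5 is a safe default.
        self.experts = [EpsNetPredictor(eps=r) for r in radii]
        self.log_w = np.zeros(len(radii))  # log-weights for numerical stability
        self.eta = eta

    def predict(self, x):
        preds = np.array([e.predict(x) for e in self.experts])
        w = np.exp(self.log_w - self.log_w.max())
        return float(np.dot(w, preds) / w.sum())

    def update(self, x, y):
        for j, e in enumerate(self.experts):
            # Downweight each expert by its own loss on this round,
            # then let the expert observe the label.
            self.log_w[j] -= self.eta * (e.predict(x) - y) ** 2
            e.update(x, y)
```

A round of the protocol is then `y_hat = agg.predict(x)` followed by `agg.update(x, y)`, for example with `agg = RadiusAggregator(radii=[0.05, 0.1, 0.2, 0.4])`; the log-domain weights avoid underflow when losses accumulate over many rounds.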