A Regret-Variance Trade-Off in Online Learning
Authors: Dirk van der Hoeven, Nikita Zhivotovskiy, Nicolò Cesa-Bianchi
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove that a variant of EWA either achieves a negative regret (i.e., the algorithm outperforms the best expert) or guarantees an O(log K) bound on both variance and regret. Building on this result, we show several examples of how the variance of predictions can be exploited in learning. In the online-to-batch analysis, we show that a large empirical variance allows us to stop the online-to-batch conversion early and outperform the risk of the best predictor in the class. We also recover the optimal rate of model selection aggregation when we do not consider early stopping. In online prediction with corrupted losses, we show that the effect of corruption on the regret can be compensated by a large variance. In online selective sampling, we design an algorithm that samples less when the variance is large, while guaranteeing the optimal regret bound in expectation. In online learning with abstention, we use a term similar to the variance to derive the first high-probability O(log K) regret bound in this setting. Finally, we extend our results to the setting of online linear regression. |
| Researcher Affiliation | Academia | Dirk van der Hoeven (dirk@dirkvanderhoeven.com), Dept. of Computer Science, Università degli Studi di Milano, Italy; Nikita Zhivotovskiy (zhivotovskiy@berkeley.edu), Dept. of Statistics, University of California, Berkeley; Nicolò Cesa-Bianchi (nicolo.cesa-bianchi@unimi.it), Dept. of Computer Science, Università degli Studi di Milano, Italy |
| Pseudocode | Yes | Algorithm 1: An algorithm for prediction with expert advice; Algorithm 2: An algorithm for online linear regression; Algorithm 3: Early Stopping Online-to-Batch for Model Selection Aggregation (a generic sketch of the baseline EWA forecaster is given after the table) |
| Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A] |
| Open Datasets | No | The paper does not provide concrete access information (specific link, DOI, repository name, formal citation with authors/year, or reference to established benchmark datasets) for a publicly available or open dataset. The paper is theoretical and does not conduct experiments with specific datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. The paper is theoretical and does not conduct experiments. |
| Hardware Specification | No | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A] |
| Software Dependencies | No | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A] |
| Experiment Setup | No | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A] |
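
Since the paper is theoretical and ships no code, the following is a minimal, illustrative sketch of the classical exponentially weighted average (EWA) forecaster that the paper's Algorithm 1 builds on. It is not the paper's variant, which couples the regret analysis to a variance term; the function name, learning-rate tuning, and toy data below are assumptions made purely for illustration.

```python
import numpy as np

def ewa_forecaster(expert_losses, eta=None):
    """Classical EWA forecaster over K experts for T rounds.

    expert_losses: array of shape (T, K) with losses in [0, 1].
    Returns the learner's cumulative (expected) loss and its regret
    against the single best expert in hindsight.
    """
    T, K = expert_losses.shape
    if eta is None:
        eta = np.sqrt(8.0 * np.log(K) / T)  # standard tuning for bounded losses

    log_weights = np.zeros(K)  # keep weights in log space for numerical stability
    learner_loss = 0.0
    for t in range(T):
        w = np.exp(log_weights - log_weights.max())
        p = w / w.sum()                        # distribution over experts at round t
        learner_loss += p @ expert_losses[t]   # expected loss of the mixture
        log_weights -= eta * expert_losses[t]  # multiplicative (exponential) update

    best_expert_loss = expert_losses.sum(axis=0).min()
    return learner_loss, learner_loss - best_expert_loss

# Toy usage: 1000 rounds, 10 experts with i.i.d. uniform losses.
rng = np.random.default_rng(0)
losses = rng.uniform(size=(1000, 10))
total, regret = ewa_forecaster(losses)
print(f"learner loss = {total:.1f}, regret = {regret:.1f}")
```

With this tuning the standard guarantee is a worst-case regret of order sqrt((T/2) log K); the paper's contribution, as summarized in the abstract above, is a variant whose regret is tied to the variance of the predictions, yielding either negative regret or an O(log K) bound on both regret and variance.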