Luckiness in Multiscale Online Learning
Authors: Wouter M. Koolen, Muriel F. Pérez-Ortiz
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate experimentally the superior performance of our scale-adaptive algorithm and discuss the subtle relationship of our results to Freund's 2016 open problem." (Abstract); see also Section 6, Experiments on Synthetic Data |
| Researcher Affiliation | Academia | Muriel Felipa Pérez-Ortiz, Centrum Wiskunde & Informatica (CWI), muriel.perez@cwi.nl; Wouter M. Koolen, CWI and University of Twente, wmkoolen@cwi.nl |
| Pseudocode | Yes | Figure 1: MUSCADA, and Figure 3: Optimistic MUSCADA (given as an update with respect to Figure 1). |
| Open Source Code | Yes | Generating this figure with the code in the supplementary material takes 3 seconds on an Intel i7-7700 processor. |
| Open Datasets | No | The paper uses synthetic data whose generation it describes, but it does not provide concrete access information (link, DOI, formal citation) for a publicly available or open dataset. |
| Dataset Splits | No | The paper describes generating synthetic data for experiments but does not provide specific train/validation/test dataset splits (percentages or counts). |
| Hardware Specification | Yes | Generating this figure with the code in the supplementary material takes 3 seconds on an Intel i7-7700 processor. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | "Parameters: A vector uk > 0 of initial weights, initial strictly positive learning rates η0,k ≤ 1/(2σk), and real, continuous nonincreasing functions Hk : R+ → R with Hk(0) = 1. ... We take K = 50 experts and set σk = 1/k for each k ≤ K. ... For the hard case, we set λk = 0 for all k. For the lucky case, we set λ2 = 1/5 instead." |
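
To make the quoted Experiment Setup row concrete, the following is a minimal sketch of how the two synthetic regimes could be instantiated. It encodes only the quantities stated in the excerpt (K = 50 experts, scales σk = 1/k, λk = 0 in the hard case, λ2 = 1/5 in the lucky case, and learning rates η0,k ≤ 1/(2σk)); the loss-generating distribution, the sign convention for the gap λ2, the horizon T, and all function and variable names are illustrative assumptions, not the authors' released supplementary code.

```python
import numpy as np


def make_synthetic_losses(T=10_000, K=50, lucky=False, seed=0):
    """Sketch of the multiscale synthetic setup quoted above.

    Encodes only what the excerpt states: K = 50 experts with scales
    sigma_k = 1/k, gaps lambda_k = 0 (hard case) or lambda_2 = 1/5
    (lucky case). The uniform loss distribution and the choice to give
    expert 2 an *advantage* of lambda_2 are assumptions for illustration.
    """
    rng = np.random.default_rng(seed)
    k = np.arange(1, K + 1)
    sigma = 1.0 / k                      # per-expert scale sigma_k = 1/k
    lam = np.zeros(K)                    # hard case: lambda_k = 0 for all k
    if lucky:
        lam[1] = 1.0 / 5.0               # lucky case: lambda_2 = 1/5 (expert index 2)

    # Assumed generator: losses uniform on [-sigma_k, sigma_k], with the gap
    # subtracted so a positive lambda_k makes that expert better on average.
    losses = rng.uniform(-sigma, sigma, size=(T, K)) - lam

    # Initial learning rates respecting the stated constraint eta_{0,k} <= 1/(2 sigma_k).
    eta0 = 1.0 / (2.0 * sigma)
    return losses, sigma, eta0


if __name__ == "__main__":
    hard_losses, sigma, eta0 = make_synthetic_losses(lucky=False)
    lucky_losses, _, _ = make_synthetic_losses(lucky=True)
    print(hard_losses.shape, sigma[:3], eta0[:3])
```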