Learning the Learning Rate for Prediction with Expert Advice
Authors: Wouter M. Koolen, Tim van Erven, Peter Grünwald
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Appendix A we describe the data used to generate Figure 1 and explain why the regret obtained by LLR is significantly smaller than the regret of AdaHedge, FTL and all other tunings described above. |
| Researcher Affiliation | Academia | Wouter M. Koolen, Queensland University of Technology and UC Berkeley, wouter.koolen@qut.edu.au; Tim van Erven, Leiden University, the Netherlands, tim@timvanerven.nl; Peter D. Grünwald, Leiden University and Centrum Wiskunde & Informatica, the Netherlands, pdg@cwi.nl |
| Pseudocode | Yes (see the sketch after this table) | Algorithm 1 LLR(π_ah, π). The grid η_1, η_2, … and weights π_1, π_2, … are defined in (8) and (12). |
| Open Source Code | No | The paper does not provide any links to open-source code or explicit statements about code availability. |
| Open Datasets | No | The paper uses 'Example data (details in Appendix A)', which describes how the data were generated for illustrative purposes; it does not refer to a publicly available or open dataset with access information. |
| Dataset Splits | No | The paper describes generating example data for Figure 1, but does not mention specific training/validation/test dataset splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper describes the LLR algorithm itself, but it does not specify concrete hyperparameter values or system-level training settings for an experimental setup. |
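
For context on the pseudocode row above, here is a minimal sketch of the general idea behind Algorithm 1 (LLR): run Hedge with several fixed learning rates in parallel and mix their predictions with a top-level Hedge. This is not the paper's exact algorithm, which uses the specific grid (8) and prior weights (12) and includes AdaHedge among the combined instances; the grid spacing, master learning rate, and random loss data below are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch of the "learn the learning rate" idea: several fixed-eta Hedge
# instances are combined by a top-level Hedge. This is an illustration only,
# not the paper's Algorithm 1 (no AdaHedge meta-expert, no grid (8) or
# weights (12)); all constants here are assumed for the example.

import numpy as np


class Hedge:
    """Exponential-weights (Hedge) forecaster over n_experts with fixed eta."""

    def __init__(self, n_experts: int, eta: float):
        self.eta = eta
        self.log_w = np.zeros(n_experts)  # log-weights avoid numerical underflow

    def weights(self) -> np.ndarray:
        w = np.exp(self.log_w - self.log_w.max())
        return w / w.sum()

    def update(self, losses: np.ndarray) -> None:
        """Exponential-weights update for one round of expert losses."""
        self.log_w -= self.eta * losses


def run_llr_sketch(loss_matrix: np.ndarray, etas: list, master_eta: float = 1.0) -> float:
    """Combine several fixed-eta Hedge instances with a top-level Hedge.

    loss_matrix: T x K array of expert losses in [0, 1].
    Returns the cumulative loss of the combined forecaster.
    """
    T, K = loss_matrix.shape
    slaves = [Hedge(K, eta) for eta in etas]
    master = Hedge(len(etas), master_eta)
    total = 0.0
    for t in range(T):
        losses = loss_matrix[t]
        # Each slave predicts a distribution over experts and suffers its dot loss.
        slave_losses = np.array([s.weights() @ losses for s in slaves])
        # The master mixes the slaves and suffers the mixture loss.
        total += float(master.weights() @ slave_losses)
        master.update(slave_losses)
        for s in slaves:
            s.update(losses)
    return total


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    loss_matrix = rng.random((1000, 5))        # illustrative random losses
    etas = [2.0 ** -i for i in range(6)]       # illustrative doubling grid of learning rates
    print("combined cumulative loss:", run_llr_sketch(loss_matrix, etas))
```

The log-space weight updates are a standard implementation choice for Hedge: when eta times the cumulative loss grows large, unnormalized weights underflow, while log-weights stay well behaved.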