No-Regret Learning with Unbounded Losses: The Case of Logarithmic Pooling

Authors: Eric Neyman, Tim Roughgarden

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present an algorithm based on online mirror descent that learns expert weights in a way that attains O(√T log T) expected regret as compared with the best weights in hindsight. Our proof has two key ideas.
Researcher Affiliation | Academia | Eric Neyman (Columbia University, New York, NY 10027; eric.neyman@columbia.edu); Tim Roughgarden (Columbia University, New York, NY 10027; tim.roughgarden@gmail.com)
Pseudocode | Yes | Algorithm 1: OMD algorithm for learning weights for logarithmic pooling (an illustrative sketch of such an update appears after this table)
Open Source Code | No | The paper does not provide any links to open-source code or an explicit statement about releasing the code for the described methodology.
Open Datasets | No | This is a theoretical paper and does not involve the use of datasets for training or evaluation.
Dataset Splits | No | This is a theoretical paper and does not involve data splits for training, validation, or testing.
Hardware Specification | No | This is a theoretical paper and does not describe any specific hardware used for experiments.
Software Dependencies | No | This is a theoretical paper and does not describe specific software dependencies with version numbers for experimental reproduction.
Experiment Setup | No | This is a theoretical paper and does not detail an experimental setup with hyperparameters or system-level training settings.
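For intuition about the setting described above, here is a minimal sketch of online mirror descent for learning logarithmic-pooling weights. It is not the paper's Algorithm 1: it assumes binary outcomes and uses the standard negative-entropy mirror map (a multiplicative-weights update), whereas the paper designs its regularizer specifically to cope with the unbounded log loss. The names `log_pool` and `omd_step` and the fixed learning rate `eta` are illustrative choices, not taken from the paper.

```python
import numpy as np

def log_pool(probs, w):
    """Logarithmic pool of binary forecasts: p* is proportional to prod_i p_i^{w_i}."""
    log_yes = np.dot(w, np.log(probs))      # log of unnormalized pooled "yes" mass
    log_no = np.dot(w, np.log1p(-probs))    # log of unnormalized pooled "no" mass
    m = max(log_yes, log_no)                # normalize in log space for stability
    yes, no = np.exp(log_yes - m), np.exp(log_no - m)
    return yes / (yes + no)

def omd_step(w, probs, outcome, eta=0.1):
    """One OMD step on the weight simplex under the log loss of the pooled forecast.

    Uses the negative-entropy mirror map (multiplicative weights) as a generic
    stand-in for the paper's regularizer.
    """
    p = log_pool(probs, w)
    llr = np.log(probs) - np.log1p(-probs)  # per-expert log-likelihood ratio
    # Gradient of -log(pooled probability of the realized outcome) w.r.t. w:
    g = -(1.0 - p) * llr if outcome == 1 else p * llr
    w_new = w * np.exp(-eta * g)            # mirror (multiplicative) step
    return w_new / w_new.sum()              # project back onto the simplex

# Toy run: three experts, expert 0 reports the true probability each round.
rng = np.random.default_rng(0)
w = np.ones(3) / 3
for _ in range(1000):
    probs = rng.uniform(0.05, 0.95, size=3)
    outcome = int(rng.random() < probs[0])
    w = omd_step(w, probs, outcome)
print(w)  # weight should concentrate on the calibrated expert 0
```

The pooling rule and the simplex projection above are standard; the paper's contribution lies in the choice of mirror map and the analysis showing that, despite the log loss being unbounded, the learned weights attain O(√T log T) expected regret against the best fixed weights in hindsight.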