Fully Unconstrained Online Learning

Authors: Ashok Cutkosky, Zak Mhammedi

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper provides new algorithms for online learning, a standard framework for the design and analysis of iterative first-order optimization algorithms used throughout machine learning. Specifically, we consider a variant of online learning often called online convex optimization [1, 2]. Formally, an online learning algorithm plays a kind of game against the environment, which we can describe using the following protocol: Protocol 1 (Online Learning / Online Convex Optimization). Input: convex domain W ⊆ R^d, number of rounds T. A minimal sketch of this protocol appears after the table.
Researcher Affiliation | Collaboration | Ashok Cutkosky (Boston University, ashok@cutkosky.com); Zakaria Mhammedi (Google Research, mhammedi@google.com)
Pseudocode | Yes | Algorithm 1: Reduction From General W to R; Algorithm 2: Algorithm for Protocol 2 (REG); Algorithm 3: 1-Dimensional Learner for Protocol 4 (BASE); Algorithm 4: Fully Unconstrained Learning in One Dimension; Algorithm 5: Fully Unconstrained Learning; Algorithm 6: Regularized 1-dimensional learner (REG) for Protocol 2
Open Source Code | No | This paper contains only mathematical content; there are no experiments.
Open Datasets | No | This paper contains only mathematical content; there are no experiments.
Dataset Splits | No | This paper contains only mathematical content; there are no experiments.
Hardware Specification | No | This paper contains only mathematical content; there are no experiments.
Software Dependencies | No | This paper contains only mathematical content; there are no experiments.
Experiment Setup | No | This paper contains only mathematical content; there are no experiments.
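To make the protocol quoted in the Research Type row concrete, below is a minimal Python sketch of the standard online convex optimization loop over a convex domain W ⊆ R^d for T rounds. The projected online-gradient-descent learner, the unit-ball domain, the quadratic losses, and the function names are illustrative assumptions for this sketch only; they are not the paper's Algorithms 1-6.

```python
# Minimal sketch of the online convex optimization protocol (Protocol 1 as quoted
# above): convex domain W, T rounds, regret measured against a fixed comparator u.
# The learner (projected online gradient descent), the domain (unit ball), and the
# losses (quadratics) are illustrative assumptions, not the paper's algorithms.
import numpy as np

def project_to_ball(w, radius=1.0):
    """Euclidean projection onto the assumed domain W = {w : ||w|| <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def run_protocol(T=100, d=5, seed=0):
    rng = np.random.default_rng(seed)
    u = project_to_ball(rng.normal(size=d))  # fixed comparator used to measure regret
    w = np.zeros(d)                          # learner's first iterate w_1
    eta = 1.0 / np.sqrt(T)                   # fixed step size (assumption)
    regret = 0.0
    for t in range(T):
        # Environment reveals a loss; here l_t(w) = 0.5 * ||w - u||^2.
        g = w - u                            # subgradient of l_t at the current iterate w_t
        # Learner suffers the linearized loss; regret accumulates <g_t, w_t - u>.
        regret += g @ (w - u)
        # Learner update: projected online gradient descent (illustrative choice).
        w = project_to_ball(w - eta * g)
    return regret

if __name__ == "__main__":
    print(f"cumulative regret after T rounds: {run_protocol():.4f}")
```

The sketch only illustrates the interaction pattern (learner plays w_t, environment reveals a subgradient, regret is tallied against a comparator); the paper's contribution concerns learners for this protocol that need no prior bound on the comparator or the gradients.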