Unconstrained Online Learning with Unbounded Losses

Authors: Andrew Jacobsen, Ashok Cutkosky

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | For this setting we provide an algorithm which guarantees $R_T(u) \le \tilde{O}\big(G\|u\|\sqrt{T} + L\|u\|^2\sqrt{T}\big)$ regret on any problem where the subgradients satisfy $\|g_t\| \le G + L\|w_t\|$, and show that this bound is unimprovable without further assumptions. We leverage this algorithm to develop new saddle-point optimization algorithms that converge in duality gap in unbounded domains, even in the absence of meaningful curvature. Finally, we provide the first algorithm achieving non-trivial dynamic regret in an unbounded domain for non-Lipschitz losses, as well as a matching lower bound. (A worked check of the quadratic-boundedness condition appears after this table.)
Researcher Affiliation | Academia | Andrew Jacobsen (1,2), Ashok Cutkosky (3). 1: Department of Computing Science, University of Alberta, Edmonton, Canada; 2: Alberta Machine Intelligence Institute (Amii), Edmonton, Canada; 3: Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts. Correspondence to: Andrew Jacobsen <ajjacobs@ualberta.ca>.
Pseudocode | Yes | Algorithm 1 (Algorithm for Quadratically Bounded Losses), Algorithm 2 (Saddle-point Reduction), Algorithm 3 (Dynamic Regret Algorithm), Algorithm 4 (Centered Mirror Descent with Adjustment), Algorithm 5 (Multi-scale Fixed-share). (A baseline protocol sketch follows the table.)
Open Source Code | No | No statement about open-source code release or links to repositories found in the paper.
Open Datasets | No | The paper is theoretical and does not involve specific datasets or empirical training.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments or dataset splits.
Hardware Specification | No | The paper is theoretical and does not specify any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not specify software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not provide details on experimental setup or hyperparameters.
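
The quadratic-boundedness condition $\|g_t\| \le G + L\|w_t\|$ quoted in the abstract row above can be sanity-checked on a concrete loss. The sketch below is purely illustrative and assumes the loss $f_t(w) = \frac{1}{2}\|w - x_t\|^2$ (our choice, not taken from the paper), whose gradient is $g_t = w - x_t$, so by the triangle inequality the condition holds with $G = \max_t \|x_t\|$ and $L = 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 1000

# Illustrative losses f_t(w) = 0.5 * ||w - x_t||^2 (our example, not the paper's).
# Gradient: g_t = w - x_t, hence ||g_t|| <= ||x_t|| + ||w||, i.e. the
# quadratic-boundedness condition ||g_t|| <= G + L * ||w_t|| holds with
# G = max_t ||x_t|| and L = 1.
xs = rng.normal(size=(T, d))
G = max(np.linalg.norm(x) for x in xs)
L = 1.0

for _ in range(100):
    w = rng.normal(scale=10.0, size=d)   # arbitrary iterate in an unbounded domain
    x = xs[rng.integers(T)]
    g = w - x                            # subgradient of f_t at w
    assert np.linalg.norm(g) <= G + L * np.linalg.norm(w) + 1e-9
print("quadratic-boundedness check passed")
```

Note that the gradient norm here genuinely grows with $\|w\|$, which is exactly why such losses fall outside the standard Lipschitz setting the paper relaxes.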
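
For context on the regret quantity $R_T(u) = \sum_{t=1}^T f_t(w_t) - f_t(u)$ that the paper's guarantees bound, the following sketch runs the standard online learning protocol with a textbook online gradient descent baseline. This is a minimal sketch under our own assumptions (learner, step size $\eta_0/\sqrt{t}$, and comparator choice are all ours); it does not implement the paper's Algorithms 1 through 5.

```python
import numpy as np

def quadratic_loss(w, x):
    return 0.5 * np.dot(w - x, w - x)

def run_ogd(xs, eta0=0.5):
    """Textbook online gradient descent with step size eta0 / sqrt(t).

    Baseline for illustration only: this is NOT the paper's Algorithm 1,
    which is designed so that regret degrades gracefully even when the
    iterates (and hence the gradients) are unbounded.
    """
    d = xs.shape[1]
    w = np.zeros(d)
    losses = []
    for t, x in enumerate(xs, start=1):
        losses.append(quadratic_loss(w, x))   # suffer loss f_t(w_t)
        g = w - x                             # subgradient of f_t at w_t
        w = w - (eta0 / np.sqrt(t)) * g
    return np.array(losses)

rng = np.random.default_rng(1)
T, d = 2000, 5
xs = rng.normal(size=(T, d))

losses = run_ogd(xs)
u = xs.mean(axis=0)   # fixed comparator: empirical minimizer of the total loss
comparator = np.array([quadratic_loss(u, x) for x in xs])
regret = losses.sum() - comparator.sum()
print(f"R_T(u) = {regret:.2f} over T = {T} rounds")
```

The interesting regime in the paper is when neither the comparator $u$ nor the iterates can be assumed to lie in a bounded set, which is where the $G\|u\|$ and $L\|u\|^2$ terms of the stated bound come into play.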