Online Learning for Adversaries with Memory: Price of Past Mistakes

Authors: Oren Anava, Elad Hazan, Shie Mannor

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work we extend the notion of learning with memory to the general Online Convex Optimization (OCO) framework, and present two algorithms that attain low regret. The first algorithm applies to Lipschitz continuous loss functions, obtaining optimal regret bounds for both convex and strongly convex losses. The second algorithm attains the optimal regret bounds and applies more broadly to convex losses without requiring Lipschitz continuity, yet is more complicated to implement. We complement the theoretical results with two applications: statistical arbitrage in finance, and multi-step ahead prediction in statistics.
Researcher Affiliation | Academia | Oren Anava (Technion, Haifa, Israel) oanava@tx.technion.ac.il; Elad Hazan (Princeton University, Princeton, NJ, USA) ehazan@cs.princeton.edu; Shie Mannor (Technion, Haifa, Israel) shie@ee.technion.ac.il
Pseudocode | Yes | Algorithm 1: 1: Input: learning rate η > 0, σ-strongly convex and smooth regularization function R(x). 2: Choose x_0, ..., x_m ∈ K arbitrarily. 3: for t = m to T do 4: Play x_t and suffer loss f_t(x_{t-m}, ..., x_t). 5: Set x_{t+1} = argmin_{x ∈ K} { η Σ_{τ=m}^{t} f̃_τ(x) + R(x) }. 6: end for (a runnable sketch of this update appears after the table)
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not describe actual experiments that involve training on a specific dataset. The application sections (Sections 5 and 6) describe how the proposed algorithms could be used, but do not involve empirical training or dataset usage.
Dataset Splits | No | The paper is theoretical and does not involve empirical validation or dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe running experiments, hence no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe running experiments, hence no specific software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not describe running experiments, hence no experimental setup details like hyperparameters or training configurations are provided.
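To make the update rule in Algorithm 1 concrete, the following Python sketch runs the loop on an assumed box-shaped decision set K = [lo, hi]^d with R(x) = ||x||² as the strongly convex regularizer, and uses SciPy's bounded minimizer as a stand-in for the exact argmin. The function names (`rftl_with_memory`, `make_loss`), the quadratic example losses, and all parameter values are illustrative assumptions, not anything specified in the paper.

```python
# Minimal sketch of Algorithm 1 on an assumed box domain K = [lo, hi]^d,
# with R(x) = ||x||^2 and SciPy standing in for the exact inner argmin.
import numpy as np
from scipy.optimize import minimize


def rftl_with_memory(loss_fns, d, m, eta, lo=-1.0, hi=1.0):
    """loss_fns[t](points) takes the list [x_{t-m}, ..., x_t] (m+1 points)."""
    T = len(loss_fns) - 1
    bounds = [(lo, hi)] * d
    # Step 2: choose x_0, ..., x_m in K arbitrarily (here: the origin).
    xs = [np.zeros(d) for _ in range(m + 1)]
    total_loss = 0.0
    for t in range(m, T + 1):
        # Step 4: play x_t and suffer the loss with memory.
        total_loss += loss_fns[t](xs[t - m:t + 1])

        # Step 5: regularized leader over the unary losses
        # f~_tau(x) = f_tau(x, ..., x) observed so far, plus R(x) = ||x||^2.
        def objective(x, t=t):
            unary = sum(loss_fns[tau]([x] * (m + 1)) for tau in range(m, t + 1))
            return eta * unary + float(np.dot(x, x))

        res = minimize(objective, xs[-1], bounds=bounds)
        xs.append(res.x)
    return xs, total_loss


if __name__ == "__main__":
    # Illustrative losses: track a moving target while penalizing drift
    # from the point played m rounds earlier.
    rng = np.random.default_rng(0)
    d, m, T = 3, 2, 50
    targets = rng.uniform(-1.0, 1.0, size=(T + 1, d))

    def make_loss(t):
        def loss(points):
            return float(np.sum((points[-1] - targets[t]) ** 2)
                         + 0.1 * np.sum((points[-1] - points[0]) ** 2))
        return loss

    losses = [make_loss(t) for t in range(T + 1)]
    xs, total = rftl_with_memory(losses, d=d, m=m, eta=0.1)
    print(f"total loss over {T - m + 1} rounds: {total:.3f}")
```

The sketch re-evaluates the accumulated unary losses from scratch at every round, which keeps it close to the written update but is not efficient; a practical implementation would presumably cache per-round terms or solve the regularized minimization incrementally.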