Efficient Online Portfolio with Logarithmic Regret

Authors: Haipeng Luo, Chen-Yu Wei, Kai Zheng

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We study the decades-old problem of online portfolio management and propose the first algorithm with logarithmic regret that is not based on Cover's Universal Portfolio algorithm and admits much faster implementation. Specifically, Universal Portfolio enjoys the optimal regret O(N ln T) for N financial instruments over T rounds, but requires log-concave sampling and has a large polynomial running time. Our algorithm, on the other hand, ensures a slightly larger but still logarithmic regret of O(N^2 (ln T)^4), and is based on the well-studied Online Mirror Descent framework with a novel regularizer that can be implemented via standard optimization methods in time O(T N^2.5) per round. The regret of all other existing works is either polynomial in T or has a potentially unbounded factor such as the inverse of the smallest price relative.
Researcher Affiliation | Academia | Haipeng Luo, Department of Computer Science, University of Southern California, haipengl@usc.edu; Chen-Yu Wei, Department of Computer Science, University of Southern California, chenyu.wei@usc.edu; Kai Zheng, Key Laboratory of Machine Perception, MOE, School of EECS, Peking University; Center for Data Science, Peking University; Beijing Institute of Big Data Research, zhengk92@pku.edu.cn
Pseudocode | Yes | Algorithm 1: BARrier-Regularized Online Newton Step (BARRONS) and Algorithm 2: ADA-BARRONS
Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets | No | The paper does not mention the use of any publicly available dataset.
Dataset Splits | No | The paper does not provide any information about training, validation, or test dataset splits.
Hardware Specification | No | The paper discusses theoretical computational complexity but does not specify any hardware (e.g., CPU/GPU models, memory, or cloud instances) used for running experiments.
Software Dependencies | No | The paper refers to the interior-point method as a general optimization technique but does not list specific software dependencies with version numbers (e.g., libraries, frameworks, or solvers).
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with concrete hyperparameter values or training configurations for empirical validation.
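To make the online portfolio setting concrete, the sketch below runs a standard exponentiated-gradient strategy (Online Mirror Descent with the entropy regularizer) on synthetic price relatives. This is a well-known baseline for the problem, not the paper's BARRONS algorithm, which instead uses a logarithmic-barrier regularizer and Newton-style updates; the function name, step size, and data are illustrative assumptions.

```python
import numpy as np

def online_portfolio_eg(price_relatives, eta=0.05):
    """Exponentiated-gradient online portfolio selection (illustrative sketch).

    This is OMD with the negative-entropy regularizer, a classical baseline
    for online portfolio management. It is NOT the paper's BARRONS algorithm.

    price_relatives: (T, N) array where entry [t, i] is the ratio of asset i's
        price at round t+1 to its price at round t.
    Returns (log_wealth, final_weights).
    """
    T, N = price_relatives.shape
    w = np.full(N, 1.0 / N)            # start with the uniform portfolio
    log_wealth = 0.0
    for x in price_relatives:
        ret = float(w @ x)             # per-round wealth multiplier w . x
        log_wealth += np.log(ret)
        grad = x / ret                 # gradient of the round's log-return
        w = w * np.exp(eta * grad)     # multiplicative (mirror) update
        w /= w.sum()                   # renormalize back onto the simplex
    return log_wealth, w
```

The quantity the regret bounds in the abstract refer to is the gap between the best constant-rebalanced portfolio's cumulative log-wealth and the algorithm's `log_wealth` over the same T rounds.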