Data-Dependent Bounds for Online Portfolio Selection Without Lipschitzness and Smoothness

Authors: Chung-En Tsai, Ying-Ting Lin, Yen-Huan Li

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This work introduces the first small-loss and gradual-variation regret bounds for online portfolio selection, the first data-dependent bounds for online convex optimization with non-Lipschitz, non-smooth losses. The proposed algorithms achieve sublinear regret in the worst case and logarithmic regret when the data is easy, with per-round time almost linear in the number of investment alternatives. The regret bounds follow from novel smoothness characterizations of the logarithmic loss, a local-norm-based analysis of follow-the-regularized-leader (FTRL) with self-concordant regularizers that are not necessarily barriers, and an implicit variant of optimistic FTRL with the log-barrier.
Researcher Affiliation | Academia | Chung-En Tsai, Department of Computer Science and Information Engineering, National Taiwan University (CHUNGENTSAI@NTU.EDU.TW)
Pseudocode | Yes | Algorithm 1: Optimistic FTRL for Online Linear Optimization
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for its methodology is publicly available.
Open Datasets | No | The paper is theoretical, focusing on mathematical analysis and algorithm design rather than empirical evaluation; it does not mention using any publicly available datasets for training or evaluation.
Dataset Splits | No | The paper reports no empirical experiments and consequently gives no details on training, validation, or test splits.
Hardware Specification | No | The paper reports no empirical experiments and consequently specifies no hardware (GPU or CPU models) used to run experiments.
Software Dependencies | No | The paper reports no empirical experiments and therefore lists no software dependencies with version numbers needed to replicate an experimental setup.
Experiment Setup | No | The paper focuses on mathematical derivations and algorithm design; it describes no experimental setup, hyperparameter values, or system-level training settings.
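To make the assessed method concrete, the algorithm family the paper analyzes (optimistic FTRL with the log-barrier for online portfolio selection) can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's Algorithm 1: it uses a linearized FTRL update, a fixed learning rate `eta`, and the previous gradient as the optimistic hint, all of which are assumptions for exposition. The FTRL step exploits the fact that minimizing `eta*<G, x> - sum_i log(x_i)` over the simplex has the stationarity form `x_i = 1/(eta*G_i + lam)`, so the multiplier `lam` can be found by bisection.

```python
import numpy as np

def log_barrier_ftrl_step(cum_grad, eta):
    """Solve x = argmin over the simplex of eta*<cum_grad, x> - sum_i log(x_i).

    Stationarity with a simplex multiplier lam gives x_i = 1/(eta*g_i + lam);
    sum_i x_i is decreasing in lam, so bisection recovers the right lam."""
    z = eta * cum_grad
    lo = -z.min() + 1e-12   # just above this, the largest coordinate blows up (sum > 1)
    hi = -z.min() + len(z)  # here every term is <= 1/d, so the sum is <= 1
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if np.sum(1.0 / (z + lam)) > 1.0:
            lo = lam
        else:
            hi = lam
    x = 1.0 / (z + 0.5 * (lo + hi))
    return x / x.sum()  # tiny renormalization for numerical safety

def optimistic_ftrl_portfolio(price_relatives, eta=0.1):
    """Run linearized optimistic FTRL with the log-barrier on a T x d array of
    price relatives; returns the final wealth of an initial unit investment."""
    T, d = price_relatives.shape
    cum_grad = np.zeros(d)
    hint = np.zeros(d)  # optimistic hint: previous round's gradient
    wealth_log = 0.0
    for t in range(T):
        x = log_barrier_ftrl_step(cum_grad + hint, eta)
        a = price_relatives[t]
        ret = float(a @ x)
        wealth_log += np.log(ret)
        grad = -a / ret  # gradient of the log loss -log(<a, x>) at x
        cum_grad += grad
        hint = grad
    return np.exp(wealth_log)
```

With zero accumulated gradients the step returns the uniform portfolio, which matches the log-barrier's minimizer at the simplex center; the hint choice means the update reduces to plain FTRL whenever consecutive gradients coincide, which is the "easy data" regime the gradual-variation bounds reward.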