Affine-Invariant Online Optimization and the Low-rank Experts Problem

Authors: Tomer Koren, Roi Livni

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends on the best possible preconditioning of the problem in retrospect and on its intrinsic dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order √(rT) for the low-rank experts problem, improving by a √r factor over the previously best known bound and resolving an open problem posed by Hazan et al. [15].
Researcher Affiliation | Collaboration | Tomer Koren, Google Brain, 1600 Amphitheatre Pkwy, Mountain View, CA 94043, tkoren@google.com; Roi Livni, Princeton University, 35 Olden St., Princeton, NJ 08540, rlivni@cs.princeton.edu
Pseudocode | Yes | Algorithm 1 (OLN: Online Lazy Newton); an illustrative sketch of this kind of update appears after this table.
Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | This is a theoretical paper focused on algorithm design and proofs of regret bounds. It does not involve training models on datasets, so no dataset availability information is provided.
Dataset Splits | No | This is a theoretical paper that does not present empirical experiments, so no dataset split information (training, validation, test) is provided.
Hardware Specification | No | This is a theoretical paper focused on algorithm design and analysis. It does not discuss any hardware used for experiments.
Software Dependencies | No | This is a theoretical paper focused on algorithm design and analysis. It does not mention any specific software dependencies or version numbers.
Experiment Setup | No | This is a theoretical paper and does not describe any empirical experimental setup, hyperparameters, or training configurations.
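
Note on the pseudocode row: the paper's Algorithm 1 (OLN: Online Lazy Newton) is only named on this page, not reproduced. The Python snippet below is a minimal, illustrative sketch of a lazy (FTRL-style) Newton-type update consistent with the abstract's description: it keeps a running sum of gradients and a matrix of gradient outer products, then plays the minimizer of the linearized losses plus an adaptive quadratic regularizer. The function name, the step size eta, the ridge term eps, and the unconstrained closed-form step are assumptions made for illustration; the paper's algorithm operates over a constrained decision set with its own parameter choices.

import numpy as np

def lazy_newton_sketch(gradient_oracle, d, T, eta=1.0, eps=1e-8):
    # Illustrative sketch only -- not the paper's Algorithm 1 verbatim.
    # Maintains the running gradient sum and the sum of gradient outer
    # products, then plays the unconstrained FTRL minimizer of
    #   eta * <sum_grad, x> + 0.5 * x^T (A + eps*I) x.
    x = np.zeros(d)                 # current iterate x_t
    sum_grad = np.zeros(d)          # running sum of observed gradients
    A = np.zeros((d, d))            # running sum of g g^T (rank-one updates)
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        g = gradient_oracle(t, x)   # g_t = grad f_t(x_t)
        sum_grad += g
        A += np.outer(g, g)
        # Lazy step: closed-form minimizer in the unconstrained case.
        x = -eta * np.linalg.solve(A + eps * np.eye(d), sum_grad)
    return iterates

# Toy usage with hypothetical losses: online least squares f_t(x) = 0.5*(a_t.x - b_t)^2
rng = np.random.default_rng(0)
a, b = rng.normal(size=(50, 3)), rng.normal(size=50)
oracle = lambda t, x: (a[t] @ x - b[t]) * a[t]
final_iterate = lazy_newton_sketch(oracle, d=3, T=50)[-1]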