The Gain from Ordering in Online Learning

Authors: Vasilis Kontonis, Mingchen Ma, Christos Tzamos

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our work presents novel results, both computational and information-theoretic. Our first result shows that approximating the optimal self-directed regret within a d^{1/poly(log log d)} factor is ETH-hard for arbitrary datasets. Our second result yields a novel characterization of the Gain of Ordering for data that are uniformly distributed on the sphere and gives an efficient approximation algorithm that bypasses the aforementioned hardness result, achieving close-to-optimal regret by exploiting the structure of the data.
Researcher Affiliation | Academia | Vasilis Kontonis (UT Austin, vasilis@cs.utexas.edu); Mingchen Ma (UW-Madison, mingchen@cs.wisc.edu); Christos Tzamos (UW-Madison, tzamos@wisc.edu)
Pseudocode | Yes | The paper includes pseudocode: Algorithm 1, SELFDIRECTEDLINEARREGRESSION(X), and Algorithm 2, SELFDIRECTEDRELUREGRESSION(X). (An illustrative sketch of the self-directed setting these algorithms operate in follows the table.)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | The paper describes theoretical datasets, such as 'a dataset X with n examples in d dimensions' or 'X is drawn i.i.d. from S^{d-1}', but it does not refer to a specific named public dataset or provide access information for any dataset used in experiments.
Dataset Splits | No | The paper does not describe any empirical experiments or dataset splits (training, validation, test) for reproducibility.
Hardware Specification | No | The paper is theoretical and does not describe any hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers for an experimental setup.
Experiment Setup | No | The paper is theoretical and does not provide specific experimental setup details, hyperparameters, or training configurations.
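
The paper's pseudocode itself is not reproduced in this report. As background, the sketch below illustrates the self-directed protocol in which Algorithms 1 and 2 operate: the learner receives the unlabeled pool X up front, adaptively chooses the order in which to predict, and observes each label only after committing to a prediction. The greedy span-based ordering and all identifiers here are hypothetical stand-ins for exposition, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch only -- NOT the paper's Algorithm 1. It demonstrates
# the self-directed protocol: the learner sees the unlabeled pool X up
# front, picks the prediction order adaptively, and observes each label
# only after predicting on that example.

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # examples on the unit sphere
w_star = rng.standard_normal(d)
y = X @ w_star                                 # realizable linear labels


def online_squared_loss(order):
    """Total squared loss when predicting in `order`, refitting a
    ridge-regularized least-squares hypothesis after every revealed label."""
    w, seen, loss = np.zeros(d), [], 0.0
    for i in order:
        loss += (X[i] @ w - y[i]) ** 2         # predict first, then observe
        seen.append(i)
        A, b = X[seen], y[seen]
        w = np.linalg.solve(A.T @ A + 1e-8 * np.eye(d), A.T @ b)
    return loss


# Self-directed order (a hypothetical heuristic): greedily request the point
# closest to the span of points seen so far, whose label is almost pinned
# down already, and defer the genuinely new directions.
basis = np.zeros((0, d))                       # orthonormal rows
remaining, sd_order = set(range(n)), []
while remaining:
    idx = np.fromiter(remaining, dtype=int)
    R = X[idx] - (X[idx] @ basis.T) @ basis    # components outside the span
    j = np.argmin(np.linalg.norm(R, axis=1))
    i, r = idx[j], R[j]
    remaining.remove(i)
    sd_order.append(i)
    if np.linalg.norm(r) > 1e-9:               # grow the span when needed
        basis = np.vstack([basis, r / np.linalg.norm(r)])

print("fixed order loss  :", online_squared_loss(range(n)))
print("chosen order loss :", online_squared_loss(sd_order))
```

On such realizable instances, the adaptively chosen order typically incurs noticeably less cumulative loss than a fixed pass over the data, since hard-to-predict directions are deferred until earlier labels have pinned them down; the improvement of the best ordering over a worst-case one is the kind of quantity the paper's Gain of Ordering analysis formalizes.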