Online Learning with Sublinear Best-Action Queries

Authors: Matteo Russo, Andrea Celli, Riccardo Colini-Baldeschi, Federico Fusco, Daniel Haimovich, Dima Karamshuk, Stefano Leonardi, Niek Tax

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | The paper does not contain any experimental results. |
| Researcher Affiliation | Collaboration | Matteo Russo (Sapienza University of Rome, Italy, mrusso@diag.uniroma1.it); Andrea Celli (Bocconi University, Italy, andrea.celli2@unibocconi.it); Riccardo Colini-Baldeschi (Meta, Central Applied Science, UK, rickuz@meta.com); Federico Fusco (Sapienza University of Rome, Italy, fuscof@diag.uniroma1.it); Daniel Haimovich (Meta, Central Applied Science, UK, danielha@meta.com); Dima Karamshuk (Meta, Central Applied Science, UK, karamshuk@meta.com); Stefano Leonardi (Sapienza University of Rome, Italy, leonardi@diag.uniroma1.it); Niek Tax (Meta, Central Applied Science, UK, niek@meta.com) |
| Pseudocode | Yes | Algorithm 1: Hedge with Best-Action Queries |
| Open Source Code | No | The paper makes no explicit statement about releasing source code for the methodology described. The NeurIPS checklist marks the code-availability questions as "NA" because the paper contains no experimental results that would require code. |
| Open Datasets | No | The paper is theoretical and conducts no experiments involving training on a dataset, so it provides no information about publicly available or open datasets. |
| Dataset Splits | No | The paper is theoretical and conducts no experiments that would require training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and contains no experimental results, so no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and describes no experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and reports no experimental setups, hyperparameters, or training configurations. |
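The paper's Algorithm 1 ("Hedge with Best-Action Queries") is not reproduced in this report. As context, the following is a minimal sketch of the general idea: standard Hedge (multiplicative weights) over n actions, augmented so that on k of the T rounds the learner may query and play that round's best action. The query schedule here (the first k rounds), the learning rate, and the function name are illustrative assumptions, not the paper's actual algorithm.

```python
import math
import random

def hedge_with_queries(losses, k, eta=None):
    """Sketch of Hedge augmented with best-action queries.

    losses: T x n matrix of per-round losses in [0, 1].
    k: number of rounds on which the round's best action is
       queried before committing (hypothetical schedule: the
       first k rounds; the paper may schedule queries differently).
    Returns the total loss incurred over the T rounds.
    """
    T, n = len(losses), len(losses[0])
    if eta is None:
        eta = math.sqrt(math.log(n) / max(T, 1))  # standard Hedge rate
    weights = [1.0] * n
    total_loss = 0.0
    for t in range(T):
        if t < k:
            # A query reveals this round's best action; play it.
            action = min(range(n), key=lambda i: losses[t][i])
        else:
            # Otherwise sample an action proportionally to the weights.
            s = sum(weights)
            action = random.choices(range(n), [w / s for w in weights])[0]
        total_loss += losses[t][action]
        # Full-information multiplicative-weights update.
        for i in range(n):
            weights[i] *= math.exp(-eta * losses[t][i])
    return total_loss
```

With k = T every round is queried and the incurred loss matches the per-round optimum; with k = 0 the sketch reduces to plain Hedge.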