Exploiting easy data in online optimization
Authors: Amir Sani, Gergely Neu, Alessandro Lazaric
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we provide numerical simulations in the setting of prediction with expert advice with comparisons to the state of the art. |
| Researcher Affiliation | Academia | Amir Sani, Gergely Neu, Alessandro Lazaric; SequeL team, INRIA Lille - Nord Europe, France; {amir.sani,gergely.neu,alessandro.lazaric}@inria.fr |
| Pseudocode | Yes | Figure 2: (A, B)-PROD |
| Open Source Code | No | The paper does not provide any explicit statements about making its source code publicly available, nor does it provide links to a code repository. |
| Open Datasets | No | We consider the two-expert settings defined by de Rooij et al. [8] where deterministic loss sequences of T = 2000 steps are designed to obtain different configurations. (We refer to [8] for a detailed specification of the settings.) |
| Dataset Splits | No | The paper discusses online optimization, where performance is measured by regret over time, rather than using traditional training, validation, and test dataset splits common in supervised machine learning. No specific dataset split information is provided. |
| Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., CPU, GPU, memory, or cloud resources). |
| Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., programming languages, libraries, or frameworks with their versions). |
| Experiment Setup | Yes | Input: learning rate η ∈ (0, 1/2], initial weights {w_{1,A}, w_{1,B}}, num. of rounds T; [...] Corollary 1. Let C ≥ 1 be an upper bound on the total benchmark loss L_T(B). Then setting η = (1/2)√((log C)/C) < 1/2 and w_{1,B} = 1 − w_{1,A} = 1 − η simultaneously guarantees... |
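
The (A,B)-Prod strategy referenced in the table (Figure 2, with the Corollary 1 tuning) is simple enough to sketch in a few lines. The Python sketch below is a non-authoritative reconstruction from the pseudocode fragments quoted above, not the authors' code: the callables `advice_a`, `advice_b`, and `loss_fn` are hypothetical placeholders for the base learner A, the benchmark B, and the loss oracle, and it assumes losses in [0, 1]. The key point it illustrates is the Prod-style update in which only A's weight changes while B's weight stays at its initial value w_{1,B} = 1 − η.

```python
import math
import random

def ab_prod(advice_a, advice_b, loss_fn, T, C):
    """Hedged sketch of (A,B)-Prod (Sani, Neu & Lazaric, NeurIPS 2014).

    advice_a(t) / advice_b(t): predictions of learner A and benchmark B
    at round t (hypothetical callables, not from the paper).
    loss_fn(x, t): loss of prediction x at round t, assumed in [0, 1].
    C: upper bound on the benchmark's total loss L_T(B), with C >= 1.
    """
    # Corollary 1 tuning: eta = (1/2) * sqrt(log(C) / C) < 1/2.
    eta = 0.5 * math.sqrt(math.log(C) / C)
    w_a = eta          # w_{1,A} = eta
    w_b = 1.0 - eta    # w_{1,B} = 1 - eta; held fixed across rounds
    total_loss = 0.0
    for t in range(T):
        x_a, x_b = advice_a(t), advice_b(t)
        # Follow A with probability w_{t,A} / (w_{t,A} + w_{1,B}).
        x_t = x_a if random.random() < w_a / (w_a + w_b) else x_b
        total_loss += loss_fn(x_t, t)
        # Prod-style update on A's weight only; B's is never updated.
        w_a *= 1.0 - eta * (loss_fn(x_a, t) - loss_fn(x_b, t))
    return total_loss
```

Under these assumptions, keeping the benchmark weight fixed is the design choice that gives the constant regret guarantee against B, while the multiplicative update on w_A preserves the usual regret bound against A.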