Optimistic optimization of a Brownian
Authors: Jean-Bastien Grill, Michal Valko, Rémi Munos
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide an algorithm with sample complexity of order log²(1/ε). This improves over previous results of Al-Mharmah and Calvin (1996) and Calvin et al. (2017), which provided only polynomial rates. Our algorithm is adaptive (each query depends on previous values) and is an instance of the optimism-in-the-face-of-uncertainty principle. ... For an illustration, we ran a simple experiment and, for different values of ε, computed the average empirical sample complexity N_ε over 250 independent runs, shown on the left plot. We also plot one point for each run of OOB instead of averaging the sample complexity, to be seen on the right. A hedged sketch of this kind of optimistic query loop appears after the table. |
| Researcher Affiliation | Collaboration | Jean-Bastien Grill, Michal Valko, Rémi Munos; SequeL team, INRIA Lille - Nord Europe, France, and DeepMind Paris, France; jbgrill@google.com, michal.valko@inria.fr, munos@google.com |
| Pseudocode | Yes | Algorithm 1 presents its pseudocode. |
| Open Source Code | No | The paper provides no concrete access to the source code for the described methodology, e.g., a repository link or an explicit statement of code release. |
| Open Datasets | No | The paper optimizes a 'random realization W of a Brownian motion' and does not use a pre-existing, publicly available dataset that would require a link or citation for access; the data is generated as part of the process. |
| Dataset Splits | No | The paper does not explicitly mention training, validation, or test dataset splits. The problem setup involves sequentially selecting query points of a Brownian motion to approximate its maximum with a certain precision, rather than a typical machine learning setup with predefined data splits. |
| Hardware Specification | No | The paper mentions running 'a simple experiment' and computing 'average empirical sample complexity Nε on 250 independent runs' but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for these experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., programming languages, libraries, or specialized solvers) needed to replicate the experiments. |
| Experiment Setup | No | The paper's numerical evaluation ran '250 independent runs' for 'different values of ε', but it does not specify further setup details such as hyperparameters, optimization settings, or other system-level configurations commonly reported for machine learning experiments. A hypothetical reproduction harness is sketched after the table. |
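For concreteness, here is a minimal sketch of the kind of optimistic query loop described in the Research Type row: keep the points of one Brownian path queried so far, upper-bound the maximum of the Brownian bridge on each sub-interval between consecutive queries, and always split the most promising interval. This is an illustration under assumptions, not the paper's exact Algorithm 1: the stopping rule, the confidence width `eta * sqrt(interval length)`, and the function name `oob_sketch` are all assumed here rather than taken from the paper.

```python
import math
import random


def oob_sketch(eps, seed=None):
    """Hedged sketch of optimistic Brownian-maximum search (assumption:
    NOT the paper's exact Algorithm 1 or its constants).

    Queries of the path W are simulated lazily: conditionally on two
    already-queried endpoints, a Brownian bridge midpoint is Gaussian with
    mean (W(a) + W(b)) / 2 and variance (b - a) / 4, so each new sample
    stays consistent with all past queries.
    """
    rng = random.Random(seed)
    # Queried points of one path on [0, 1]: W(0) = 0, W(1) ~ N(0, 1).
    ts = [0.0, 1.0]
    ws = [0.0, rng.gauss(0.0, 1.0)]
    # Heuristic confidence width (assumed, not the paper's eta_eps).
    eta = math.sqrt(2.0 * math.log(1.0 / eps))
    n_queries = 1  # W(1) counts as a query; W(0) is known.
    while True:
        best = max(ws)
        # Optimism: upper-bound the bridge maximum on every sub-interval.
        ucbs = [max(ws[i], ws[i + 1]) + eta * math.sqrt(ts[i + 1] - ts[i])
                for i in range(len(ts) - 1)]
        i = max(range(len(ucbs)), key=ucbs.__getitem__)
        if ucbs[i] <= best + eps:
            # No interval can still hide a value more than eps above best.
            return best, n_queries
        # Split the most promising interval at its midpoint.
        a, b = ts[i], ts[i + 1]
        mid = 0.5 * (a + b)
        w_mid = rng.gauss(0.5 * (ws[i] + ws[i + 1]),
                          math.sqrt((b - a) / 4.0))
        ts.insert(i + 1, mid)
        ws.insert(i + 1, w_mid)
        n_queries += 1
```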
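Building on the sketch above, a hypothetical harness mirroring the quoted evaluation (250 independent runs per value of ε, averaged empirical sample complexity N_ε) might look as follows; the ε grid and the seeding scheme are assumptions, since the paper reports neither.

```python
# Hypothetical harness (assumed eps grid and seeding): average the empirical
# sample complexity N_eps of oob_sketch over 250 independent runs per eps.
if __name__ == "__main__":
    for eps in (1e-1, 1e-2, 1e-3):
        counts = [oob_sketch(eps, seed=run)[1] for run in range(250)]
        print(f"eps={eps:g}  mean N_eps = {sum(counts) / len(counts):.1f}")
```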