Black-box optimization of noisy functions with unknown smoothness

Authors: Jean-Bastien Grill, Michal Valko, Rémi Munos

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental (4 experiments) | We ran experiments on the function plotted in Figure 1 for HOO algorithms with different values of ρ and the POO algorithm for ρmax = 0.9. This function, described in Section 1, has upper and lower envelopes that are not of the same order and therefore has d > 0 for a standard partitioning. Figure 2 shows the simple regret of the algorithms as a function of the number of evaluations.
Researcher Affiliation | Collaboration | Jean-Bastien Grill and Michal Valko, SequeL team, INRIA Lille – Nord Europe, France (jean-bastien.grill@inria.fr, michal.valko@inria.fr); Rémi Munos, Google DeepMind, UK (munos@google.com)
Pseudocode | Yes | Algorithm 1 POO
  Parameters: K, P = {P_{h,i}}
  Optional parameters: ρmax, νmax
  Initialization: Dmax ← ln K / ln(1/ρmax); n ← 0 {number of evaluations performed}; N ← 1 {number of HOO instances}; S ← {(νmax, ρmax)} {set of HOO instances}
  while computational budget is available do
    while N ≤ ½ Dmax ln(n / (ln n)) do   {start new HOOs}
      for i = 1, ..., N do
        s ← (νmax, ρmax^{2N/(2i+1)})
        S ← S ∪ {s}
        Perform n/N function evaluations with HOO(s)
        Update the average reward μ̂[s] of HOO(s)
      end for
      n ← 2n; N ← 2N
    end while   {ensure there are enough HOOs}
    for s ∈ S do
      Perform a function evaluation with HOO(s)
      Update the average reward μ̂[s] of HOO(s)
    end for
    n ← n + N
  end while
  s* ← argmax_{s ∈ S} μ̂[s]
  Output: a random point evaluated by HOO(s*)
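The scheduling logic of Algorithm 1 can be sketched in Python. This is a minimal illustration, not the authors' implementation: the `HOO` class below is a hypothetical stand-in that samples uniformly (a real HOO grows a hierarchical partition tree and follows B-values), and the helper names `poo`, `step`, and `mean_reward` are invented for this sketch.

```python
import math
import random


class HOO:
    """Hypothetical stand-in for a HOO instance. A real HOO maintains a
    tree over the space and picks the cell with the highest B-value;
    here we sample uniformly on [0, 1] so POO's scheduling stays readable."""

    def __init__(self, nu, rho, f):
        self.nu, self.rho, self.f = nu, rho, f
        self.points, self.rewards = [], []

    def step(self):
        x = random.random()  # placeholder for HOO's tree-guided choice
        self.points.append(x)
        self.rewards.append(self.f(x))

    def mean_reward(self):
        return sum(self.rewards) / len(self.rewards)


def poo(f, budget, K=2, rho_max=0.9, nu_max=1.0):
    """Sketch of Algorithm 1 (POO): run HOO instances on a geometric grid
    of rho values, doubling their number as evaluations accumulate."""
    d_max = math.log(K) / math.log(1.0 / rho_max)
    instances = [HOO(nu_max, rho_max, f)]
    instances[0].step()
    n, N = 1, 1  # evaluations performed, number of HOO instances
    while n < budget:
        # Start new HOOs while N lags behind ~ (1/2) Dmax ln(n / ln n).
        while n > math.e and N <= 0.5 * d_max * math.log(n / math.log(n)):
            for i in range(1, N + 1):
                s = HOO(nu_max, rho_max ** (2.0 * N / (2 * i + 1)), f)
                for _ in range(max(1, n // N)):  # give it n/N evaluations
                    s.step()
                instances.append(s)
            n, N = 2 * n, 2 * N
        for s in instances:  # one more evaluation per instance
            s.step()
        n += N
    best = max(instances, key=lambda s: s.mean_reward())
    return random.choice(best.points)
```

For example, `poo(lambda x: 1.0 - abs(x - 0.5), budget=500)` returns a point in [0, 1] evaluated by the HOO instance with the best average reward, mirroring the "random point evaluated by HOO(s*)" output line of the algorithm.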
Open Source Code | Yes | Code available at https://sequel.lille.inria.fr/Software/POO
Open Datasets | No | The paper experiments on a synthetic function described in Figure 1, not a pre-existing or publicly available dataset. No information about public dataset access is provided.
Dataset Splits | No | The paper discusses function evaluations for an optimization problem and reports simple regret based on a total number of evaluations (e.g., 500 or 5000), but it does not use standard training, validation, or test splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list any specific software dependencies or version numbers (e.g., programming languages, libraries, frameworks) used for the experiments.
Experiment Setup | Yes | We ran experiments on the function plotted in Figure 1 for HOO algorithms with different values of ρ and the POO algorithm for ρmax = 0.9. This function, described in Section 1, has upper and lower envelopes that are not of the same order and therefore has d > 0 for a standard partitioning. Figure 2 shows the simple regret of the algorithms as a function of the number of evaluations. [...] We compare the algorithms according to the expected simple regret, which is the difference between the optimum and the expected value of the function at the point they return. We compute it as the average of the function values over all evaluated points.
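The regret measure described above reduces to a one-line computation. This is a sketch under the setup quoted in the row: `simple_regret` is a hypothetical helper name, `f_star` is the (known, for a synthetic benchmark) optimal value, and `evaluated_values` are the function values observed at all evaluated points.

```python
def simple_regret(f_star, evaluated_values):
    """Expected simple regret as described in the paper's setup:
    the optimum f* minus the average function value over all
    evaluated points (hypothetical helper for illustration)."""
    return f_star - sum(evaluated_values) / len(evaluated_values)
```

For instance, with an optimum of 1.0 and observed values [0.9, 0.8, 0.7], the simple regret is about 0.2; it shrinks toward 0 as the evaluated points concentrate near the optimum.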