Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Relaxed Gaussian Process Interpolation: a Goal-Oriented Approach to Bayesian Optimization

Authors: Sébastien J. Petit, Julien Bect, Emmanuel Vazquez

JMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial. In this section, we run numerical experiments to demonstrate the interest of using EGO-R instead of EGO for minimization problems. The optimization algorithms are tested against a benchmark of test functions from Surjanovic and Bingham (2013) summarized in Table 1, with nrep = 100 (random) repetitions, and a budget of ntot = 300 evaluations for each repetition.
Researcher Affiliation Academia Sébastien J. Petit EMAIL Laboratoire National de Métrologie et d'Essais, 78197, Trappes Cedex, France Julien Bect EMAIL Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190, Gif-sur-Yvette, France Emmanuel Vazquez EMAIL Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190, Gif-sur-Yvette, France
Pseudocode Yes Algorithm 1: reGP with automatic selection of the relaxation range. Input: data (x_n, z_n); a range of interest Q; and a list R \ Q = R^(0), …, R^(G−1) = ∅ of relaxation-range candidates. For g = 0 to G − 1: obtain θ̂_n^(g) and z_n^(g) by solving (14) with R^(g); compute J_n(R^(g)) with Q, θ̂_n^(g), and z_n^(g) using (26). Output: the pair (θ̂_n^(g), z_n^(g)) that minimizes (26).
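The selection loop in Algorithm 1 can be illustrated with a minimal Python sketch. The functions `solve_relaxed_interpolation` and `goal_oriented_criterion` below are hypothetical toy stand-ins for the paper's Eqs. (14) and (26) (the real implementations live in the authors' repository); only the candidate-ranking structure of the algorithm is shown.

```python
def solve_relaxed_interpolation(data, relaxation_range):
    # Toy stand-in for solving Eq. (14) with relaxation range R^(g):
    # returns fitted hyperparameters theta and relaxed observations z.
    theta = sum(relaxation_range) if relaxation_range else 0.0
    z = [v + theta for v in data]
    return theta, z

def goal_oriented_criterion(theta, z, range_of_interest):
    # Toy stand-in for the goal-oriented criterion J_n of Eq. (26).
    lo, hi = range_of_interest
    return sum((v - lo) ** 2 for v in z) + theta ** 2

def select_relaxation_range(data, range_of_interest, candidates):
    """Return the (theta, z) pair minimizing the criterion over the
    candidate relaxation ranges R^(0), ..., R^(G-1) (last one empty)."""
    best = None
    for R in candidates:
        theta, z = solve_relaxed_interpolation(data, R)
        score = goal_oriented_criterion(theta, z, range_of_interest)
        if best is None or score < best[0]:
            best = (score, theta, z)
    return best[1], best[2]
```

With the toy stand-ins above, `select_relaxation_range([1.0, 2.0], (0.0, 3.0), [[0.5], []])` picks the empty relaxation range, because it yields the smaller criterion value; the real algorithm performs the same argmin over the paper's actual fit and criterion.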
Open Source Code Yes An open source implementation of the reGP method and the numerical experiments is available online at https://github.com/relaxedGP/regp_paper_experiments.
Open Datasets Yes The optimization algorithms are tested against a benchmark of test functions from Surjanovic and Bingham (2013) summarized in Table 1, with nrep = 100 (random) repetitions, and a budget of ntot = 300 evaluations for each repetition.
Dataset Splits No The paper describes Bayesian optimization, which involves sequentially selecting evaluation points rather than using pre-defined dataset splits. It mentions an 'initial design of size n0 = 10d' and 'a budget of ntot = 300 evaluations', which refer to the number of sequentially acquired evaluation points, not a conventional train/test/validation split of a static dataset.
Hardware Specification No No specific hardware details such as GPU/CPU models, processors, or memory specifications are provided for the experiments.
Software Dependencies No The paper mentions 'The SciPy implementation was used with default parameters' and refers to a 'sequential Monte Carlo approach' but does not specify version numbers for SciPy or any other software dependencies.
Experiment Setup Yes For all four algorithms, we use an initial design of size n0 = 10d, and we consider GPs with a constant mean function and a Matérn covariance function with regularity ν = 5/2. The maximization of the sampling criteria (5) and (27) is performed using a sequential Monte Carlo approach (Benassi et al., 2012; Feliot et al., 2017). The optimization algorithms are tested against a benchmark of test functions from Surjanovic and Bingham (2013) summarized in Table 1, with nrep = 100 (random) repetitions, and a budget of ntot = 300 evaluations for each repetition. ... In this article, the numerical experiments were conducted with α = 0.25. ... (with G = 10 in the experiments below). When applied to UCB, all reGP variants outperform the standard UCB algorithm on Beale, Six-hump Camel, Dixon-Price (4) and (10), Goldstein-Price, all instances of Perm and Rosenbrock, Three-hump Camel, Zakharov (4) and (6).
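The Matérn covariance with regularity ν = 5/2 mentioned in the setup has a standard closed form, k(r) = σ²(1 + √5·r/ρ + 5r²/(3ρ²))·exp(−√5·r/ρ), where r is the distance between inputs. A minimal sketch of that formula (with hypothetical parameter names `sigma2` for the variance and `rho` for the length scale; the paper's own parameterization may differ):

```python
import math

def matern52(x, y, sigma2=1.0, rho=1.0):
    # Matern covariance with regularity nu = 5/2:
    # k(r) = sigma^2 * (1 + s + s^2/3) * exp(-s), with s = sqrt(5)*r/rho.
    r = abs(x - y)
    s = math.sqrt(5.0) * r / rho
    return sigma2 * (1.0 + s + s * s / 3.0) * math.exp(-s)
```

The kernel equals σ² at zero distance and decays monotonically with r, which is why it is a common default prior covariance in Bayesian optimization.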