Gaussian Process Optimization with Mutual Information
Authors: Emile Contal, Vianney Perchet, Nicolas Vayatis
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We confirm the efficiency of this algorithm on synthetic and real tasks against the natural competitor, GP-UCB, and also the Expected Improvement heuristic. Figure 3. Empirical mean and confidence interval of the average regret R_T/T in terms of iteration T on real and synthetic tasks for the GP-MI and GP-UCB algorithms and the EI heuristic (lower is better). |
| Researcher Affiliation | Academia | CMLA, UMR CNRS 8536, ENS Cachan, France; LPMA, Université Paris Diderot, France |
| Pseudocode | Yes | Algorithm 1: GP-MI; Algorithm 2: Generic Optimization Scheme (φ_t). A sketch of the GP-MI selection rule follows the table. |
| Open Source Code | No | The paper does not provide any specific links to source code repositories or explicitly state that the code for the methodology is released. |
| Open Datasets | Yes | Recent post-tsunami survey data as well as the numerical simulations of (Hill et al., 2012) have shown that in some cases the run-up... Motivated by these observations (Stefanakis et al., 2012) investigated this phenomenon by employing numerical simulations using the VOLNA code (Dutykh et al., 2011)... In the study of (Stefanakis et al., 2013) the setup was controlled by five physical parameters... The Mackey-Glass delay-differential equation... It has been used as a benchmark for example by (Flake & Lawrence, 2002). The Branin or Branin-Hoo function is a common benchmark function for global optimization. The Goldstein & Price function is another benchmark function for global optimization... A definition of the Branin function follows the table. |
| Dataset Splits | No | We first picked half of the data set to estimate the hyperparameters of the kernel via cross validation in this subset. This describes a cross-validation strategy for hyperparameter estimation but does not detail explicit train/validation/test splits for the main model evaluation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | For all data sets and algorithms the learners were initialized with a random subset of 10 observations {(x_i, y_i)}_{i<10}. The value of the parameter δ for the GP-MI and the GP-UCB algorithms was fixed to δ = 10⁻⁶ for all these experimental tasks. These settings appear in the sketch below. |
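
The Pseudocode and Experiment Setup rows refer to Algorithm 1 (GP-MI), its initialization with 10 random observations, and the choice δ = 10⁻⁶. The snippet below is a minimal sketch of that selection loop, not the authors' implementation: it uses scikit-learn's GaussianProcessRegressor as a stand-in GP posterior, and the kernel choice, length scale, noise level, and the finite candidate set are assumptions introduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gp_mi_maximize(f, candidates, n_init=10, n_iter=50, delta=1e-6, seed=0):
    """Sketch of a GP-MI-style loop: maximize f over a finite candidate set."""
    rng = np.random.default_rng(seed)
    alpha = np.log(2.0 / delta)              # alpha = ln(2/delta)
    # Random initial design of 10 observations, as in the experiment setup row.
    idx = rng.choice(len(candidates), size=n_init, replace=False)
    X, y = candidates[idx], np.array([f(x) for x in candidates[idx]])
    gamma = 0.0                              # cumulated variance of queried points

    # RBF kernel with unit length scale is an assumption, not the paper's setting.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  alpha=1e-6, normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        mu, sigma = gp.predict(candidates, return_std=True)
        # Acquisition: phi_t(x) = sqrt(alpha) * (sqrt(sigma^2(x) + gamma) - sqrt(gamma))
        phi = np.sqrt(alpha) * (np.sqrt(sigma ** 2 + gamma) - np.sqrt(gamma))
        best = int(np.argmax(mu + phi))
        x_next = candidates[best]
        gamma += sigma[best] ** 2            # update the explored-variance budget
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[int(np.argmax(y))], float(np.max(y))
```

The benchmark functions listed above are usually stated as minimization problems, so one would pass the negated objective to a maximization loop like this one.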
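
The Open Datasets row lists the Branin (Branin-Hoo) function among the synthetic benchmarks. Its standard closed form, with the usual parameterization and domain (which the excerpt above does not spell out), is sketched below.

```python
import numpy as np

def branin(x1, x2):
    """Standard Branin-Hoo benchmark; global minimum ≈ 0.3979 at three points."""
    a = 1.0
    b = 5.1 / (4.0 * np.pi ** 2)
    c = 5.0 / np.pi
    r = 6.0
    s = 10.0
    t = 1.0 / (8.0 * np.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1.0 - t) * np.cos(x1) + s

# Conventional evaluation domain: x1 in [-5, 10], x2 in [0, 15].
```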