Predictive Entropy Search for Efficient Global Optimization of Black-box Functions

Authors: José Miguel Hernández-Lobato, Matthew W. Hoffman, Zoubin Ghahramani

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.
Researcher Affiliation | Academia | José Miguel Hernández-Lobato, jmh233@cam.ac.uk, University of Cambridge
Pseudocode | Yes | Algorithm 1: Generic Bayesian optimization (a sketch of this loop appears below the table).
Open Source Code | Yes | The code for all these operations is publicly available at http://jmhl.org.
Open Datasets | Yes | The first one (NNet) returns the predictive accuracy of a neural network on a random train/test partition of the Boston Housing dataset [3]. [3] K. Bache and M. Lichman. UCI Machine Learning Repository, 2013.
Dataset Splits | No | The paper mentions a 'random train/test partition' but does not specify exact percentages or a validation split; the word 'validation' appears only in mathematical contexts, not in reference to dataset splitting.
Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not provide software dependency details with version numbers.
Experiment Setup | Yes | In our experiments, we use Gaussian process priors for f with squared-exponential kernels k(x, x′) = γ² exp{−0.5 Σ_i (x_i − x′_i)² / ℓ_i²}. The corresponding spectral density is zero-mean Gaussian with covariance diag([ℓ_i⁻²]) and normalizing constant α = γ². The model hyperparameters are {γ, ℓ_1, …, ℓ_d, σ²}. We use broad, uninformative Gamma hyperpriors. (A sketch of this kernel appears below.)
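
To make the quoted kernel concrete, here is a minimal NumPy sketch of the ARD squared-exponential covariance from the Experiment Setup row. The function name se_ard and its signature are our own assumptions for illustration, not the authors' released code (which is linked from the Open Source Code row above).

```python
import numpy as np

def se_ard(X1, X2, gamma, ell):
    """ARD squared-exponential kernel:
    k(x, x') = gamma**2 * exp(-0.5 * sum_i (x_i - x'_i)**2 / ell_i**2).
    X1: (n, d) inputs, X2: (m, d) inputs,
    gamma: signal amplitude, ell: (d,) per-dimension lengthscales."""
    A, B = X1 / ell, X2 / ell                  # rescale each input dimension
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)                     # pairwise squared distances
    return gamma**2 * np.exp(-0.5 * np.maximum(sq, 0.0))

# Example: 5 random points in a 3-dimensional input space.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
K = se_ard(X, X, gamma=1.5, ell=np.array([1.0, 0.5, 2.0]))
print(K.shape)   # (5, 5); diagonal entries are approximately gamma**2 = 2.25
```

In the paper's setup, γ, the ℓ_i, and the noise variance σ² would additionally be given broad Gamma hyperpriors rather than fixed as above; that machinery is omitted from this sketch.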
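The Pseudocode row refers to the paper's Algorithm 1, a generic Bayesian optimization loop: fit a probabilistic surrogate to the evaluations gathered so far, maximize an acquisition function to choose the next query point, evaluate the objective there, and repeat. The sketch below illustrates that template under stated assumptions: a simple GP-UCB rule stands in for the paper's PES acquisition, an isotropic kernel with fixed hyperparameters replaces the hyperprior treatment, and all names (k_se, gp_posterior, bayes_opt) are ours.

```python
import numpy as np

def k_se(A, B, gamma=1.0, ell=0.5):
    """Isotropic squared-exponential kernel (single shared lengthscale)."""
    sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return gamma**2 * np.exp(-0.5 * sq / ell**2)

def gp_posterior(Xs, X, y, noise=1e-4):
    """GP posterior mean and variance at test points Xs given data (X, y)."""
    K = k_se(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k_se(Xs, X)
    v = np.linalg.solve(L, Ks.T)
    var = k_se(Xs, Xs).diagonal() - np.sum(v**2, axis=0)
    return Ks @ alpha, np.maximum(var, 1e-12)

def bayes_opt(f, lo, hi, n_init=3, n_iter=25, seed=0):
    """Generic Bayesian optimization loop (cf. the paper's Algorithm 1):
    fit surrogate -> maximize acquisition -> evaluate objective -> repeat."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([f(x[0]) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(lo, hi, size=(512, 1))  # random candidate pool
        mean, var = gp_posterior(cand, X, y)
        acq = mean + 2.0 * np.sqrt(var)            # GP-UCB stand-in for PES
        x_next = cand[np.argmax(acq)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next[0]))
    best = np.argmax(y)
    return X[best, 0], y[best]

# Example: maximize a one-dimensional toy objective on [-2, 2].
x_star, y_star = bayes_opt(lambda x: np.sin(5 * x) - (x - 0.3) ** 2, -2.0, 2.0)
print(f"best x = {x_star:.3f}, f(x) = {y_star:.3f}")
```

Replacing the acq line with an estimate of the expected reduction in entropy of the global maximizer's location is precisely where PES departs from this generic template.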