Fast Information-theoretic Bayesian Optimisation

Authors: Binxin Ru, Michael A. Osborne, Mark McLeod, Diego Granziol

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate empirically that FITBO inherits the performance associated with information-theoretic Bayesian optimisation, while being even faster than simpler Bayesian optimisation approaches, such as Expected Improvement. We conduct a series of experiments to test the empirical performance of FITBO and compare it with other popular acquisition functions.
Researcher Affiliation | Collaboration | (1) Department of Engineering Science, University of Oxford, Oxford, UK; (2) Mind Foundry Ltd., Oxford.
Pseudocode | Yes | Algorithm 1: FITBO acquisition function (a hedged sketch of an acquisition of this form appears after the table).
Open Source Code | Yes | Our Matlab code for FITBO will be available at https://github.com/rubinxin/FITBO.
Open Datasets | Yes | We perform optimisation tasks on three challenging benchmark functions: Branin (defined in [0, 1]^2), Eggholder (defined in [0, 1]^2) and Hartmann (defined in [0, 1]^6)...Boston housing dataset (Bache and Lichman, 2013)...validation set of the MNIST dataset (LeCun et al., 1998)...breast cancer dataset (Bache and Lichman, 2013). (A standard unit-square Branin definition is sketched after the table.)
Dataset Splits | Yes | The dataset is randomly partitioned into train/validation/test sets...We compute the median IR and the median L2 over 40 random initialisations. (A sketch of the median-IR computation follows the table.)
Hardware Specification | Yes | All the timing tests were performed exclusively on a 2.3 GHz Intel Core i5.
Software Dependencies | No | The paper mentions 'Matlab code' but does not provide specific version numbers for Matlab or any other software dependencies.
Experiment Setup | Yes | In all tests, we set the observation noise to σ_n^2 = 10^-3 and resample all the hyperparameters after each function evaluation. We initialise all Bayesian optimisation algorithms with 3 random observations and set the observation noise to σ_n^2 = 10^-3. All experiments are repeated 40 times.
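
For readers without access to the Matlab release, the following is a minimal Python sketch of an acquisition of the FITBO form referenced in the Pseudocode row: the entropy of a Gaussian-mixture predictive minus the average entropy of its components. The moment-matching approximation of the mixture entropy and all function names here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fitbo_acquisition(mixture_means, mixture_vars):
    """Hedged sketch of a FITBO-style acquisition value at a single input x.

    mixture_means, mixture_vars: arrays of shape (M,) holding the predictive
    mean and variance of y at x, conditioned on each of M sampled
    hyperparameter / global-minimum values. Each conditional predictive is
    treated as Gaussian, as in the paper's approximation.
    """
    mixture_means = np.asarray(mixture_means, dtype=float)
    mixture_vars = np.asarray(mixture_vars, dtype=float)

    # Entropy of a Gaussian with variance v: 0.5 * log(2 * pi * e * v).
    gaussian_entropy = lambda v: 0.5 * np.log(2.0 * np.pi * np.e * v)

    # Moment-match the equal-weight Gaussian mixture (1/M) sum_j N(m_j, v_j)
    # with a single Gaussian; this yields an analytical upper bound on the
    # mixture entropy (an assumption here about the approximation used).
    mix_var = mixture_vars.mean() + mixture_means.var()
    entropy_of_mixture = gaussian_entropy(mix_var)

    # Average entropy of the individual Gaussian components.
    mean_component_entropy = gaussian_entropy(mixture_vars).mean()

    # Approximate mutual information between y at x and the minimum value.
    return entropy_of_mixture - mean_component_entropy
```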
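
The Branin benchmark quoted in the Open Datasets row is a standard test function; one common rescaling to the unit square [0, 1]^2 is sketched below. The constants and the mapping from [0, 1]^2 to the native domain are the usual textbook ones, assumed here rather than taken from the paper.

```python
import numpy as np

def branin_unit_square(x):
    """Standard Branin function rescaled to the unit square [0, 1]^2.

    The native domain x1 in [-5, 10], x2 in [0, 15] is mapped from [0, 1]^2.
    """
    x = np.asarray(x, dtype=float)
    x1 = 15.0 * x[0] - 5.0
    x2 = 15.0 * x[1]
    a, b, c = 1.0, 5.1 / (4.0 * np.pi ** 2), 5.0 / np.pi
    r, s, t = 6.0, 10.0, 1.0 / (8.0 * np.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 \
        + s * (1.0 - t) * np.cos(x1) + s
```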
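
The median immediate regret (IR) over 40 random initialisations, referenced in the Dataset Splits row, can be computed as in this short sketch; the exact regret definition used in the paper's plots (for example, whether it is reported on a log scale) is an assumption here.

```python
import numpy as np

def median_immediate_regret(best_values_per_run, f_min):
    """Median immediate regret across repeated optimisation runs.

    best_values_per_run: array of shape (n_runs, n_iterations) holding the
    best objective value found so far in each run (e.g. 40 runs).
    f_min: true global minimum of the benchmark function.
    Returns the median |best - f_min| at each iteration.
    """
    best = np.asarray(best_values_per_run, dtype=float)
    return np.median(np.abs(best - f_min), axis=0)
```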