Asynchronous Batch Bayesian Optimisation with Improved Local Penalisation

Authors: Ahsan Alvi, Binxin Ru, Jan-Peter Calliess, Stephen Roberts, Michael A. Osborne

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate empirically the efficacy of PLAyBOOK and its variants on synthetic tasks and a real-world problem. We undertake a comparison between synchronous and asynchronous BO, and show that asynchronous BO often outperforms synchronous batch BO in both wall-clock time and number of function evaluations."
Researcher Affiliation | Collaboration | 1. Department of Engineering Science, University of Oxford; 2. Mind Foundry Ltd., Oxford, UK; 3. Oxford-Man Institute of Quantitative Finance.
Pseudocode | No | The paper provides mathematical equations and descriptions of the method, but no structured pseudocode or algorithm blocks. (An illustrative sketch of an asynchronous BO loop follows the table.)
Open Source Code | Yes | "Implementation available at https://github.com/a5a/asynchronous-BO"
Open Datasets | Yes | "We evaluate the performance of the different batch BO strategies using popular global optimisation test functions. We show results for the Eggholder function defined on R^2 (egg-2D), the Ackley function defined on R^5 and the Michalewicz function defined on R^10 (mic-10D)." Details for these and other challenging global optimisation test functions can be found at https://www.sfu.ca/~ssurjano/optimization.html. The authors further experimented on a real-world application: tuning the hyperparameters of a 6-layer Convolutional Neural Network (CNN) for image classification on the CIFAR-10 dataset (Krizhevsky, 2009). (The Eggholder function is sketched after the table.)
Dataset Splits | Yes | "We trained the CNN on half of the training set for 20 epochs and each function evaluation returns the validation error of the model." (A sketch of such a split follows the table.)
Hardware Specification | No | The paper acknowledges that "Computational resources were supported by Arcus HTC and JADE HPC at the University of Oxford and Hartree national computing facilities, UK", but does not specify the CPUs, GPUs, or memory used for the experiments.
Software Dependencies | No | The paper states that "we implemented all methods in Python using the same packages", but does not name specific packages or version numbers (e.g., Python version, library versions).
Experiment Setup | Yes | "For our PLAyBOOK-H and PLAyBOOK-HL, we choose γ = 1 and p = 5 in the HLP (Eq. (12)). For TS, we use 10,000 sample points for each batch point selection. For the other methods, we evaluate α(x) at 3,000 random locations and then choose the best one after locally optimising the best 5 samples for a small number of local optimisation steps. We trained the CNN on half of the training set for 20 epochs and each function evaluation returns the validation error of the model." (The random-sampling-plus-local-refinement acquisition optimisation is sketched after the table.)
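
Because the paper provides no pseudocode, the following is a minimal illustrative sketch of what a generic asynchronous BO loop with local penalisation looks like. It is our own construction, not the paper's PLAyBOOK algorithm: the UCB acquisition, the scikit-learn GP surrogate, the fixed penalty radius rho, and the thread-pool worker model are all assumptions chosen to keep the example self-contained and runnable.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
from sklearn.gaussian_process import GaussianProcessRegressor

def async_bo(f, bounds, n_workers=4, n_evals=30, seed=0):
    """Toy asynchronous BO maximisation loop with a hard penalty near busy points."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    d = len(lo)
    X = lo + (hi - lo) * rng.random((2 * d, d))  # small random initial design
    y = np.array([f(x) for x in X])

    def penalised_ucb(model, cand, busy, kappa=2.0, rho=0.1):
        mu, sigma = model.predict(cand, return_std=True)
        alpha = mu + kappa * sigma
        alpha = alpha - alpha.min()  # non-negative, so multiplicative
        for xb in busy:              # penalties can only shrink it
            dist = np.linalg.norm(cand - xb, axis=1)
            alpha *= np.minimum(dist / (rho * np.linalg.norm(hi - lo)), 1.0)
        return alpha

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures, submitted = {}, 0
        while submitted < n_evals or futures:
            # Hand a new point to every free worker immediately, penalising
            # the acquisition around points still under evaluation.
            while submitted < n_evals and len(futures) < n_workers:
                model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
                cand = lo + (hi - lo) * rng.random((3000, d))
                alpha = penalised_ucb(model, cand, busy=list(futures.values()))
                x_next = cand[int(np.argmax(alpha))]
                futures[pool.submit(f, x_next)] = x_next
                submitted += 1
            done, _ = wait(list(futures), return_when=FIRST_COMPLETED)
            for fut in done:  # fold finished evaluations back into the data
                X = np.vstack([X, futures.pop(fut)])
                y = np.append(y, fut.result())
    return X[int(np.argmax(y))], float(y.max())
```

The key asynchronous ingredient is that a free worker receives a new point as soon as it becomes idle, rather than waiting for the whole batch to finish; the penalty merely damps the acquisition near points other workers are still evaluating.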
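As a concrete example of the synthetic objectives quoted in the table, here is the 2-D Eggholder function (egg-2D) from the sfu.ca test-function library linked above; a maximising loop such as the sketch above would negate it.

```python
import numpy as np

def eggholder(x):
    """Eggholder function on [-512, 512]^2.

    Highly multimodal; global minimum f(512, 404.2319) ≈ -959.6407.
    """
    x = np.atleast_2d(x)
    x1, x2 = x[:, 0], x[:, 1]
    return (-(x2 + 47.0) * np.sin(np.sqrt(np.abs(x2 + x1 / 2.0 + 47.0)))
            - x1 * np.sin(np.sqrt(np.abs(x1 - (x2 + 47.0)))))

# Example: minimise the Eggholder by maximising its negation with the
# illustrative async_bo sketch above.
bounds = np.array([[-512.0, 512.0], [-512.0, 512.0]])
x_best, y_best = async_bo(lambda x: -eggholder(x)[0], bounds)
```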
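For the CNN experiment, the quoted split leaves the validation set unspecified. A minimal sketch, assuming PyTorch/torchvision and assuming the held-out half of CIFAR-10's 50,000 training images serves as the validation set (our assumption, not stated in the paper); the 6-layer CNN and its 20-epoch training loop are omitted.

```python
import torch
from torch.utils.data import Subset, DataLoader
from torchvision import datasets, transforms

train_full = datasets.CIFAR10(root="data", train=True, download=True,
                              transform=transforms.ToTensor())
g = torch.Generator().manual_seed(0)
perm = torch.randperm(len(train_full), generator=g)
half = len(train_full) // 2                        # 25,000 images for training
train_half = Subset(train_full, perm[:half].tolist())
val_half = Subset(train_full, perm[half:].tolist())  # assumed validation set

train_loader = DataLoader(train_half, batch_size=128, shuffle=True)
val_loader = DataLoader(val_half, batch_size=256)
# Each BO function evaluation would train the CNN on train_loader for
# 20 epochs and return the error rate measured on val_loader.
```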
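The quoted setup also pins down how the acquisition α(x) is maximised for the non-TS methods: evaluation at 3,000 random locations, followed by local optimisation of the best 5 samples. A minimal sketch of that procedure follows; the L-BFGS-B optimiser and the 20-iteration cap are our assumptions, since the paper says only "a small number of local optimisation steps", and `maximise_acquisition` is a helper name of our choosing, not an identifier from the authors' codebase.

```python
import numpy as np
from scipy.optimize import minimize

def maximise_acquisition(acq, bounds, n_samples=3000, n_refine=5, rng=None):
    """Maximise acq: (n, d) -> (n,) over a box via random search + local refinement."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    # Step 1: evaluate the acquisition at 3,000 uniform random locations.
    cand = lo + (hi - lo) * rng.random((n_samples, len(lo)))
    vals = acq(cand)
    # Step 2: locally optimise the best 5 samples for a few steps each.
    starts = cand[np.argsort(vals)[-n_refine:]]
    results = [minimize(lambda x: -acq(x[None, :])[0], x0,
                        method="L-BFGS-B", bounds=list(map(tuple, bounds)),
                        options={"maxiter": 20})
               for x0 in starts]
    # Step 3: return the best refined candidate.
    best = min(results, key=lambda r: r.fun)
    return best.x, -best.fun
```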