BOCK: Bayesian Optimization with Cylindrical Kernels

Authors: ChangYong Oh, Efstratios Gavves, Max Welling

ICML 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We evaluate BOCK extensively, showing that it is not only more accurate and efficient, but it also scales successfully to problems with a dimensionality as high as 500." |
| Researcher Affiliation | Academia | QUVA Lab, Informatics Institute, University of Amsterdam, Amsterdam, Netherlands; Canadian Institute for Advanced Research, Toronto, Canada. |
| Pseudocode | Yes | "Algorithm 2 Bayesian Optimization pipeline." (A generic sketch of this pipeline follows the table.) |
| Open Source Code | Yes | "The implementation is available online (https://github.com/ChangYong-Oh/HyperSphere)." |
| Open Datasets | Yes | MNIST: "train (first 45000 images of MNIST train data set), validation (next 5000 images of MNIST train data set) and test (10000 images of MNIST test data set)." |
| Dataset Splits | Yes | "In this experiment, we split the data set into train (first 45000 images of MNIST train data set), validation (next 5000 images of MNIST train data set) and test (10000 images of MNIST test data set)." (A loading sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components and optimizers such as Adam and Spearmint but does not specify version numbers for reproducibility. |
| Experiment Setup | Yes | "For the acquisition function, we use the Adam (Kingma & Ba, 2014) optimizer, instead of LBFGS-B (Zhu et al., 1997). To begin the optimization we feed 20 initial points to Adam." The benchmark functions are solved in 20 and 100 dimensions, using 200 and 600 function evaluations respectively for all Bayesian Optimization methods; for BOCK, P = 3. Given a candidate, 100 epochs of SGD are run on the training set with an annealed learning rate (0.01 for 50 epochs, then 0.001 for 50 more). (A schedule sketch follows the table.) |
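
To make the "Pseudocode" row concrete, below is a minimal sketch of the generic Bayesian Optimization pipeline that Algorithm 2 describes. This is not BOCK itself: the surrogate uses a plain RBF kernel in place of the paper's cylindrical kernel, and the acquisition function is maximized by random candidate search rather than Adam. All names (`rbf_kernel`, `bayes_opt`, etc.) are illustrative.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, lengthscale=0.3):
    # Squared-exponential kernel; stands in for BOCK's cylindrical kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xq, noise=1e-6):
    # GP posterior mean and std at query points Xq, given observations (X, y).
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Kq = rbf_kernel(Xq, X)
    mu = Kq @ alpha
    v = np.linalg.solve(L, Kq.T)
    var = 1.0 - (v ** 2).sum(axis=0)      # unit prior variance for this kernel
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which a point beats the incumbent.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def bayes_opt(f, bounds, n_init=5, n_iter=20, seed=0):
    # Generic BO loop: initial design, then fit surrogate / maximize
    # acquisition / evaluate, repeated for a fixed evaluation budget.
    rng = np.random.default_rng(seed)
    d = bounds.shape[0]
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, d))
        mu, sigma = gp_posterior(X, y, cand)
        x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Usage: minimize a toy quadratic over [-1, 1]^2.
best_x, best_y = bayes_opt(lambda x: ((x - 0.3) ** 2).sum(),
                           np.array([[-1.0, 1.0], [-1.0, 1.0]]))
print(best_x, best_y)
```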
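
The "Dataset Splits" row pins down the MNIST partition exactly (first 45000 train images, next 5000 for validation, the official 10000-image test set), so it can be reproduced directly. A minimal sketch assuming torchvision as the loader, which the paper does not prescribe:

```python
from torch.utils.data import Subset
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
full_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
test_set = datasets.MNIST("data", train=False, download=True, transform=to_tensor)

train_set = Subset(full_train, range(45000))       # first 45000 images
val_set = Subset(full_train, range(45000, 50000))  # next 5000 images
# test_set already holds the 10000 official MNIST test images
```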
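
The annealed learning rate in the "Experiment Setup" row (0.01 for 50 epochs, then 0.001 for 50 more) corresponds to a standard milestone schedule. A minimal PyTorch sketch, with a placeholder model and dummy data standing in for the paper's actual network and MNIST batches:

```python
import torch

model = torch.nn.Linear(784, 10)            # placeholder; not the paper's network
data = torch.randn(64, 784)                 # dummy batch standing in for MNIST
labels = torch.randint(0, 10, (64,))
loss_fn = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Multiply the learning rate by 0.1 after epoch 50: 0.01 -> 0.001.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50], gamma=0.1)

for epoch in range(100):                    # 100 epochs of SGD per candidate
    optimizer.zero_grad()
    loss_fn(model(data), labels).backward()
    optimizer.step()
    scheduler.step()
```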