Principled Bayesian Optimization in Collaboration with Human Experts

Authors: Wenjie Xu, Masaki Adachi, Colin Jones, Michael A. Osborne

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Real-world applications empirically demonstrate that our method not only outperforms existing baselines, but also maintains robustness despite varying labelling accuracy, in tasks of battery design with human experts." and "Real-world contribution: empirically, our algorithm provides both fast convergence and resilience against erroneous inputs. It outperformed existing methods in both popular synthetic, and new real-world, tasks in designing lithium-ion batteries."
Researcher Affiliation | Collaboration | "1 Automatic Control Laboratory, EPFL; 2 Urban Energy Systems Laboratory, Empa; 3 Machine Learning Research Group, University of Oxford; 4 Toyota Motor Corporation"
Pseudocode | Yes | "Algorithm 1 COllaborative Bayesian Optimization with Labelling Experts (COBOL)." (a simplified sketch of such a loop is given after the table)
Open Source Code | Yes | "More details for reproducing results are available on GitHub: https://github.com/ma921/COBOL/"
Open Datasets | Yes | "using the 4-dimensional Ackley function [1]... five common synthetic functions [89]... open dataset and fitted functions to interpolate between data points... Using the dataset from [26], we fitted the Casteel-Amis equation..." (the Ackley benchmark is reproduced after the table)
Dataset Splits | No | The paper describes an iterative optimization process in which data points are queried sequentially and experiments are repeated with different initial datasets; it does not specify explicit train/validation/test splits with percentages or sample counts for reproducibility.
Hardware Specification | Yes | "All experiments were conducted on a laptop PC. MacBook Pro 2019, 2.4 GHz 8-Core Intel Core i9, 64 GB 2667 MHz DDR4"
Software Dependencies | No | "The GP hyperparameters were tuned by maximising the marginal likelihood on observed datasets using a multi-start L-BFGS-B method [53] (the default BoTorch optimiser [14]). The constrained optimisation in Prob. (6) was solved using the interior-point nonlinear optimiser IPOPT [95], which is highly scalable for solving the primal problem, via the symbolic interface CasADi [8]... The models were implemented in GPyTorch [31]." No version numbers are specified. (sketches of the fitting and solver calls appear after the table)
Experiment Setup | Yes | "We employed an ARD RBF kernel for both f and g... The initial datasets consisted of three random data points sampled uniformly from within the domain, and in each iteration, one data point was queried. Additionally, we collected initial expert labels by asking an expert to label accept (= 0) or reject (= 1) for 10 uniformly random points. All experiments were repeated ten times with different initial datasets and random seeds... The GP hyperparameters were tuned by maximising the marginal likelihood on observed datasets using a multi-start L-BFGS-B method [53] (the default BoTorch optimiser [14])... Other hyperparameters were set as η = 3, λ0 = 1, and g_thr = 0.1 by default throughout the experiments... Table 3: The complete list of hyperparameters and their settings."
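
To make the pseudocode row concrete: below is a deliberately simplified, hypothetical sketch of a collaborative BO loop in the spirit of Algorithm 1 (COBOL), not a reproduction of the authors' algorithm. It fits one GP surrogate to objective observations and another to binary expert labels, then picks the confidence-bound minimiser among points the label surrogate deems acceptable, reusing the quoted defaults (η = 3, g_thr = 0.1) and initial-data sizes (3 objective points, 10 expert labels). The toy objective `f` and the `expert_label` oracle are invented for illustration, and treating 0/1 labels as regression targets is a simplification of the paper's labelling model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f(x):
    # toy black-box objective to minimise (invented for illustration)
    return np.sin(3 * x) + 0.5 * x

def expert_label(x):
    # hypothetical expert oracle: accept (=0) near a trusted region, reject (=1) elsewhere
    return float(abs(x - 0.6) > 0.5)

grid = np.linspace(0.0, 2.0, 201).reshape(-1, 1)  # candidate set over the domain

# initial data mirroring the quoted setup: 3 objective observations, 10 expert labels
X = rng.uniform(0.0, 2.0, size=(3, 1))
y = f(X).ravel()
Xl = rng.uniform(0.0, 2.0, size=(10, 1))
lab = np.array([expert_label(v) for v in Xl.ravel()])

eta, g_thr = 3.0, 0.1  # confidence weight and label threshold (paper defaults)
for t in range(20):
    gp_f = GaussianProcessRegressor(RBF(0.3), alpha=1e-6, normalize_y=True).fit(X, y)
    gp_g = GaussianProcessRegressor(RBF(0.3), alpha=1e-3).fit(Xl, lab)  # 0/1 labels as regression: a simplification
    mu, sd = gp_f.predict(grid, return_std=True)
    lcb = mu - eta * sd                        # lower confidence bound for minimisation
    ok = gp_g.predict(grid) <= g_thr           # points the label surrogate accepts
    cand = grid[ok][np.argmin(lcb[ok])] if ok.any() else grid[np.argmin(lcb)]
    Xl = np.vstack([Xl, [cand]])               # ask the expert about the new candidate...
    lab = np.append(lab, expert_label(cand[0]))
    X = np.vstack([X, [cand]])                 # ...then evaluate the objective there
    y = np.append(y, f(cand[0]))

print("best observed:", float(X[np.argmin(y)][0]), float(y.min()))
```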
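
The 4-dimensional Ackley function cited in the open-datasets row is a standard benchmark with a known closed form; a minimal implementation with the usual constants (a = 20, b = 0.2, c = 2π), which has its global minimum of 0 at the origin:

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2 * np.pi):
    # standard Ackley benchmark; x is a d-dimensional point (d = 4 in the paper)
    x = np.asarray(x, dtype=float)
    return (-a * np.exp(-b * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(c * x))) + a + np.e)

print(ackley(np.zeros(4)))  # ~0.0 up to floating-point error
```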
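
The dependency and setup rows quote an ARD RBF kernel with hyperparameters tuned by maximising the marginal likelihood via BoTorch's default optimiser. Under recent BoTorch/GPyTorch APIs (the paper pins no versions), that pipeline looks roughly like the sketch below; the training data here are placeholders.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from gpytorch.kernels import ScaleKernel, RBFKernel
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 4, dtype=torch.double)    # placeholder inputs in a 4-d unit cube
train_Y = train_X.sin().sum(dim=-1, keepdim=True)  # placeholder targets

# ARD RBF kernel (one lengthscale per input dimension), as quoted in the setup
model = SingleTaskGP(
    train_X, train_Y,
    covar_module=ScaleKernel(RBFKernel(ard_num_dims=train_X.shape[-1])),
)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)  # maximises the marginal likelihood (L-BFGS-B under the hood)
```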
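
The constrained problem the paper calls Prob. (6) is not quoted in this report, so the following only illustrates the named toolchain: passing a generic smooth constrained programme to IPOPT through CasADi's Opti interface. The objective and constraint are invented placeholders, not the paper's formulation.

```python
import casadi as ca

opti = ca.Opti()                               # CasADi's high-level NLP interface
x = opti.variable(2)                           # decision variables
opti.minimize((x[0] - 1) ** 2 + x[1] ** 2)     # placeholder smooth objective
opti.subject_to(x[0] ** 2 + x[1] ** 2 <= 1)    # placeholder inequality constraint
opti.solver("ipopt")                           # interior-point solver named in the paper
sol = opti.solve()
print(sol.value(x))
```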