HypBO: Accelerating Black-Box Scientific Experiments Using Experts’ Hypotheses
Authors: Abdoulatif Cissé, Xenophon Evangelopoulos, Sam Carruthers, Vladimir V. Gusev, Andrew I. Cooper
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the performance of our method on a range of synthetic functions and demonstrate its practical utility on a real chemical design task where the use of expert hypotheses accelerates the search performance significantly. |
| Researcher Affiliation | Academia | 1Department of Chemistry, University of Liverpool, England, UK 2Leverhulme Research Centre for Functional Materials Design, University of Liverpool, England, UK 3Department of Computer Science, University of Liverpool, England, UK {abdoulatif.cisse, evangx, sgscarru, vladimir.gusev, aicooper}@liverpool.ac.uk |
| Pseudocode | Yes | Algorithm 1 Hypothesis Bayesian Optimization (HypBO) |
| Open Source Code | Yes | Reproducibility details are available in the SM and the source code can be found at https://github.com/Ablatif6c/HypBO. |
| Open Datasets | Yes | We fitted this model against a total ground truth dataset of 1119 experimental observations supplied by the authors of [Burger et al., 2020]. |
| Dataset Splits | No | The paper describes experiments on synthetic functions and a simulated chemical space, but does not provide explicit details on training, validation, or test dataset splits (e.g., percentages or counts) for the BO problems themselves. The GPR model for the chemical space was 'fitted against a total ground truth dataset of 1119 experimental observations' without specifying explicit splits for that fitting process. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as library names with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'), needed to replicate the experiment. |
| Experiment Setup | Yes | For all experiments, we use preset hyperparameters for HypBO. We set the lower level limit lmax to 2, the upper level limit umax to 5, the number of locally optimal samples T to 1, and the growth rate γ to 0. All experiments are warm-started with five initial points except for the photocatalytic hydrogen production experiment with mixed hypotheses, whose initial sample count is 10. |
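The experiment-setup row above lists the paper's preset hyperparameters. As a minimal sketch, they could be collected into a plain config object; note that the key names (`lmax`, `umax`, `T`, `gamma`, `n_init`) and the helper function are illustrative assumptions, not the authors' actual API.

```python
# Hedged sketch: the preset HypBO hyperparameters quoted in the table,
# gathered into a config dict. Key names are illustrative, not taken
# from the official repository.
hypbo_config = {
    "lmax": 2,     # lower level limit
    "umax": 5,     # upper level limit
    "T": 1,        # number of locally optimal samples
    "gamma": 0.0,  # growth rate
    "n_init": 5,   # default warm-start sample count
}


def n_initial_points(mixed_hypotheses: bool = False) -> int:
    """Warm-start count per the paper: 5 by default, 10 for the
    photocatalytic hydrogen production run with mixed hypotheses."""
    return 10 if mixed_hypotheses else hypbo_config["n_init"]
```

This makes the one reported exception (the mixed-hypotheses run) explicit in code rather than prose.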