Joint Composite Latent Space Bayesian Optimization

Authors: Natalie Maus, Zhiyuan Jerry Lin, Maximilian Balandat, Eytan Bakshy

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate JoCo's performance against that of other methods on nine high-dimensional, composite function BO tasks. Specifically, we consider as baselines BO using Deep Kernel Learning (Wilson et al., 2016a) (Vanilla BO w/ DKL), Trust Region Bayesian Optimization (TuRBO) (Eriksson et al., 2019), CMA-ES (Hansen, 2023), and random sampling. Our results are summarized in Figure 2. Error bars show the standard error of the mean over 40 replicate runs.
Researcher Affiliation | Collaboration | 1Department of Computer and Information Science, University of Pennsylvania; 2Meta. Correspondence to: Natalie Maus <nmaus@seas.upenn.edu>.
Pseudocode | Yes | Algorithm 1 JoCo
Open Source Code | Yes | Code to reproduce results is available at https://github.com/nataliemaus/joco_icml24.
Open Datasets | Yes | ImageNet classifier from TorchVision (TorchVision maintainers and contributors, 2016).
Dataset Splits | No | We consider the predictive accuracy on held-out data collected during a single optimization trace (using an 80/20 train/test split).
Hardware Specification | Yes | To produce all results in the paper, we use a cluster of machines consisting of NVIDIA A100 and V100 GPUs.
Software Dependencies | No | We implement JoCo leveraging the BoTorch (Balandat et al., 2020) and GPyTorch (Gardner et al., 2018) open source libraries (both BoTorch and GPyTorch are released under MIT license).
Experiment Setup | Yes | We update the models using gradient descent with the Adam optimizer using a learning rate of 0.01 as suggested by the best-performing results in our ablation studies in Appendix A.3.
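
The error bars quoted in the Research Type row are the standard error of the mean over 40 replicate runs, i.e. the sample standard deviation divided by the square root of the number of runs. A minimal sketch of that computation (not the authors' plotting code; the `runs` data here is synthetic):

```python
import numpy as np

def sem(values) -> float:
    """Standard error of the mean: sample std (ddof=1) / sqrt(n)."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / np.sqrt(values.size)

# Synthetic stand-in for the best objective values from 40 replicate runs.
runs = np.random.default_rng(0).normal(loc=1.0, scale=0.1, size=40)
mean, err = runs.mean(), sem(runs)  # plotted as mean +/- err in error-bar figures
```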
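The Open Datasets row refers to a pretrained ImageNet classifier from TorchVision. A sketch of loading one follows; ResNet-18 is an assumed example (the excerpt does not name a specific architecture), and the `weights=` API requires torchvision >= 0.13:

```python
import torch
from torchvision import models

# Pretrained ImageNet classifier; ResNet-18 is an illustrative choice only.
weights = models.ResNet18_Weights.IMAGENET1K_V1
classifier = models.resnet18(weights=weights).eval()

# The weights object carries the matching preprocessing transforms.
preprocess = weights.transforms()
x = torch.rand(1, 3, 256, 256)          # stand-in for a real image batch
with torch.no_grad():
    logits = classifier(preprocess(x))  # shape (1, 1000) ImageNet class logits
```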
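The Dataset Splits row quotes an 80/20 train/test split over data collected during a single optimization trace. One way to produce such a split is sketched below; this is illustrative only, and the released repository is authoritative for the paper's actual splitting:

```python
import torch

def split_80_20(X: torch.Tensor, y: torch.Tensor, seed: int = 0):
    """Randomly split paired data into 80% train / 20% test."""
    perm = torch.randperm(len(X), generator=torch.Generator().manual_seed(seed))
    cut = int(0.8 * len(X))
    return X[perm[:cut]], y[perm[:cut]], X[perm[cut:]], y[perm[cut:]]
```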
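The Software Dependencies and Experiment Setup rows together describe model updates built on BoTorch/GPyTorch via gradient descent with Adam at learning rate 0.01. A minimal sketch of that fitting pattern on a standard single-task GP (a stand-in for JoCo's actual models; the data and step count are assumptions):

```python
import torch
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(20, 3, dtype=torch.double)    # toy inputs in [0, 1]^3
train_Y = train_X.sin().sum(dim=-1, keepdim=True)  # toy scalar targets

model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
mll.train()

# Adam with lr=0.01, matching the quoted setup; 100 steps is an assumption.
optimizer = torch.optim.Adam(mll.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    output = model(*model.train_inputs)             # prior at the training inputs
    loss = -mll(output, model.train_targets)        # negative marginal log likelihood
    loss.backward()
    optimizer.step()
```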