Robust Multi-Objective Bayesian Optimization Under Input Noise

Authors: Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, Eytan Bakshy

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we find that our approach significantly outperforms alternative methods and efficiently identifies optimal robust designs that will satisfy specifications across multiple metrics with high probability.
Researcher Affiliation | Collaboration | 1 Meta, 2 University of Oxford, 3 Georgia Institute of Technology. Correspondence to: Samuel Daulton <sdaulton@fb.com>, Sait Cakmak <saitcakmak@fb.com>.
Pseudocode | No | The paper describes algorithms and methods in textual form but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is open-sourced at github.com/facebookresearch/robust_mobo.
Open Datasets | Yes | Gaussian Mixture Model (GMM) (d = 2, M = 2, α = 0.9): This is a variant of the GMM problem from Fröhlich et al. (2020)... Constrained Branin Currin (d = 2, M = 2, V = 1, α = 0.7): We subject this problem, which originates from Daulton et al. (2020)... Disc Brake (d = 4, M = 2, V = 4, α = 0.95): In this disc brake manufacturing problem... (Ray and Liew, 2002). Penicillin Production (d = 7, M = 3, α = 0.8): This problem considers optimizing the manufacturing process of penicillin (Liang and Lai, 2021).
Dataset Splits | No | The paper describes the number of function evaluations for Bayesian optimization but does not specify explicit training/validation/test dataset splits for the black-box optimization problems themselves.
Hardware Specification | Yes | The experiments were timed on a shared cluster using 4 CPU cores, 1 GPU, and 16 GB of RAM.
Software Dependencies | Yes | We implemented all methods using the BoTorch library (Balandat et al., 2020) (except for NSGA-II), leveraging the existing implementations of NEI and qNEHVI available at https://github.com/pytorch/botorch. We used the implementation of NSGA-II in the pymoo library (Blank and Deb, 2020), which is available at https://github.com/anyoptimization/pymoo.
Experiment Setup | Yes | For all BO methods, we begin by evaluating 2(d + 1) design points from a scrambled Sobol sequence. We use nξ = 32 samples... For all model-based methods, we model each objective and constraint with an independent GP with a Matérn 5/2 ARD kernel (Rasmussen, 2004). For all MC-based acquisition functions, we use NMC = 256 QMC samples from the GP posterior... For RFF-based methods, the approximate GP sample (using 512 random features)... We optimize all acquisition functions using multi-start optimization with L-BFGS-B (Zhu et al., 1997).
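The 2(d + 1)-point scrambled Sobol initialization described above can be sketched with SciPy's QMC module (a stand-in for BoTorch's `draw_sobol_samples`; the helper name `initial_design` and the seed are illustrative, not from the paper):

```python
from scipy.stats import qmc

def initial_design(d, seed=0):
    """Draw 2(d + 1) points from a scrambled Sobol sequence in [0, 1]^d."""
    n = 2 * (d + 1)
    engine = qmc.Sobol(d=d, scramble=True, seed=seed)
    return engine.random(n)

X0 = initial_design(d=2)  # 6 initial points for a d = 2 problem such as GMM
```

Scrambling randomizes the sequence across replications while preserving its low-discrepancy structure, which is why it is preferred over an unscrambled Sobol sequence for repeated BO runs.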
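The NMC = 256 QMC posterior samples can likewise be illustrated: the GP posterior at a batch of candidate points is multivariate normal, so QMC samples of it can be drawn with a Sobol-based normal engine. This is a minimal sketch; the mean and covariance below are made-up placeholders standing in for an actual GP posterior:

```python
import numpy as np
from scipy.stats import qmc

# Placeholder posterior mean/covariance at two candidate points (illustrative).
mean = np.array([0.0, 1.0])
cov = np.array([[1.0, 0.3], [0.3, 0.5]])

# Sobol-based QMC sampler for a multivariate normal distribution.
engine = qmc.MultivariateNormalQMC(mean=mean, cov=cov, seed=0)
samples = engine.random(256)  # N_MC = 256 QMC samples, shape (256, 2)
```

QMC samples cover the Gaussian more evenly than i.i.d. draws, which reduces the variance of the Monte Carlo acquisition-value estimates.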
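Multi-start acquisition optimization with L-BFGS-B can be sketched with SciPy (a simplification of BoTorch's `optimize_acqf`; the function name, restart count, and toy objective are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def multistart_maximize(acq, bounds, n_restarts=8, seed=0):
    """Maximize `acq` over a box by running L-BFGS-B from random restarts."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    best_x, best_val = None, -np.inf
    for _ in range(n_restarts):
        x0 = rng.uniform(lo, hi)
        # Negate: scipy minimizes, but acquisition functions are maximized.
        res = minimize(lambda x: -acq(x), x0, method="L-BFGS-B", bounds=bounds)
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x, best_val

# Toy "acquisition function" with its maximum at x = (0.3, 0.7).
x_star, _ = multistart_maximize(
    lambda x: -np.sum((x - np.array([0.3, 0.7])) ** 2),
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
```

Random restarts guard against L-BFGS-B stalling in a local optimum of a multimodal acquisition surface; BoTorch additionally seeds restarts from promising raw samples rather than uniformly at random.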