Boundary Exploration for Bayesian Optimization With Unknown Physical Constraints

Authors: Yunsheng Tian, Ane Zuniga, Xinwei Zhang, Johannes P. Dürholt, Payel Das, Jie Chen, Wojciech Matusik, Mina Konakovic Lukovic

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method demonstrates superior performance against state-of-the-art methods through comprehensive experiments on synthetic and real-world benchmarks.
Researcher Affiliation | Collaboration | MIT CSAIL, USA; MIT-IBM Watson AI Lab, IBM Research, USA; Evonik Operations GmbH, Germany. Correspondence to: Yunsheng Tian <yunsheng@csail.mit.edu>, Mina Konaković Luković <minakl@mit.edu>.
Pseudocode | No | The paper describes the proposed method step-by-step in paragraphs but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code available at: https://github.com/yunshengtian/BE-CBO
Open Datasets | Yes | Our benchmark includes three synthetic test functions and nine real-world engineering design problems: 2D Townsend function (Townsend, 2014), 2D Simionescu function (Simionescu, 2014), 2D LSQ function (Gramacy et al., 2016), 2D three-bar truss design (Ray & Saini, 2001), 3D tension-compression spring design (Hedar et al., 2006), 4D welded beam design (Hedar et al., 2006), 4D gas transmission compressor design (Pant et al., 2009), 4D pressure vessel design (Coello & Montes, 2002), 7D speed reducer design (Lemonge et al., 2010), 9D planetary gear train design (Rao et al., 2012), 10D rolling element bearing design (Gupta et al., 2007), and 30D cantilever beam design (Cheng et al., 2018). Please refer to Appendix B.1 for more detailed descriptions.
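As a hedged illustration of the kind of benchmark listed above, here is a pure-Python sketch of the 2D Townsend function, a standard constrained test problem with a nonconvex feasible region. The formulas follow the commonly cited definition (objective to be minimized, feasibility inside a heart-shaped boundary); the exact variant used in the paper may differ.

```python
import math

def townsend_f(x, y):
    """Townsend objective (to be minimized)."""
    return -(math.cos((x - 0.1) * y)) ** 2 - x * math.sin(3 * x + y)

def townsend_feasible(x, y):
    """Unknown-constraint analogue: the point must lie inside a
    heart-shaped boundary defined in polar form."""
    t = math.atan2(x, y)
    rhs = (2 * math.cos(t) - 0.5 * math.cos(2 * t)
           - 0.25 * math.cos(3 * t) - 0.125 * math.cos(4 * t)) ** 2 \
          + (2 * math.sin(t)) ** 2
    return x ** 2 + y ** 2 < rhs
```

The search domain is commonly given as -2.25 ≤ x ≤ 2.5, -2.5 ≤ y ≤ 1.75; a constrained optimizer only sees feasibility as a binary oracle, which is what makes boundary modeling relevant here.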
Dataset Splits | No | The paper describes using initial random samples and total evaluations, and averaging results over random seeds, but does not explicitly define training/validation/test splits or a specific validation methodology for the dataset.
Hardware Specification | Yes | The experiments are conducted in parallel on a distributed server with Intel Xeon Platinum 8260 CPUs with 4 GB RAM per core, where each individual experiment runs on a single CPU thread without GPU.
Software Dependencies | No | The paper mentions software like BoTorch, GPyTorch, SciPy, and the Adam optimizer, but does not specify their version numbers.
Experiment Setup | Yes | We implement an ensemble of Multi-layer Perceptrons (MLPs) for modeling the unknown constraints. For each MLP in the ensemble, we use a simple and standard structure of 4 fully connected layers with 64·log2(d) neurons in each hidden layer, where d is the problem dimension. ... The ensemble is optimized for maximal marginal log-likelihood using the Adam optimizer with a 3×10⁻⁴ learning rate for 1,000 iterations.
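The quoted setup (a deep ensemble of 4-layer MLPs whose hidden width scales as 64·log2(d)) can be sketched in NumPy as below. This is a minimal forward-pass illustration only: training via Adam is omitted, the tanh hidden activation and 5-member ensemble size are assumptions not stated in the excerpt, and the sigmoid output is read as a feasibility probability with the ensemble standard deviation as epistemic uncertainty.

```python
import numpy as np

def hidden_width(d):
    """Width rule quoted from the setup: 64 * log2(d) neurons per hidden layer."""
    return max(1, int(round(64 * np.log2(d))))

class MLP:
    """A 4-layer fully connected binary classifier.
    (tanh hidden activation is an assumption; the excerpt does not specify one.)"""
    def __init__(self, d, rng):
        w = hidden_width(d)
        sizes = [d, w, w, w, 1]  # 4 weight layers in total
        self.params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
                       for m, n in zip(sizes[:-1], sizes[1:])]

    def __call__(self, x):
        for i, (W, b) in enumerate(self.params):
            x = x @ W + b
            if i < len(self.params) - 1:
                x = np.tanh(x)
        return 1.0 / (1.0 + np.exp(-x))  # sigmoid -> feasibility probability

class Ensemble:
    """Deep ensemble over independently initialized MLPs."""
    def __init__(self, d, n_members=5, seed=0):
        rng = np.random.default_rng(seed)
        self.members = [MLP(d, rng) for _ in range(n_members)]

    def predict(self, x):
        preds = np.stack([m(x) for m in self.members])
        # Mean = predicted feasibility probability; std = epistemic uncertainty
        # that a boundary-exploration acquisition can exploit.
        return preds.mean(axis=0), preds.std(axis=0)
```

In the actual method each member would be fit to the observed feasible/infeasible labels (the excerpt states Adam with a 3×10⁻⁴ learning rate for 1,000 iterations); the ensemble disagreement is what distinguishes this constraint model from a single network.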