Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations

Authors: Mina Konaković Luković, Yunsheng Tian, Wojciech Matusik

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on both synthetic test functions and real-world benchmark problems show that our algorithm predominantly outperforms relevant state-of-the-art methods."
Researcher Affiliation | Academia | Mina Konaković Luković (MIT CSAIL, minakl@mit.edu), Yunsheng Tian (MIT CSAIL, yunsheng@csail.mit.edu), Wojciech Matusik (MIT CSAIL, wojciech@csail.mit.edu)
Pseudocode | Yes | "Algorithm 1 DGEMO" ... "Algorithm 2 Batch Selection Algorithm"
Open Source Code | Yes | "The code is available at https://github.com/yunshengtian/DGEMO." ... "Our code will be released open-source with reproducibility guarantee."
Open Datasets | Yes | "First, we conduct experiments on 13 synthetic multi-objective test functions including ZDT1-3 [54], DTLZ1-6 [10], OKA1-2 [34], VLMOP2-3 [48], which are widely used in previous literature." ... "Second, we adopt 7 real-world engineering design problems presented in RE problem suite [46], which are: four bar truss design, reinforced concrete beam design, hatch cover design, welded beam design, disc brake design, gear train design, and rocket injector design."
Dataset Splits | No | The paper mentions 'initial samples' and a 'batch size' for its Bayesian optimization approach ('50 initial samples', 'batch of 10 samples'), but it does not specify traditional train/validation/test splits (percentages or sample counts) for the benchmark problems.
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, memory, or cloud instance types used to run the experiments.
Software Dependencies | No | The paper notes that its implementation is 'built upon pymoo [5], a state-of-the-art Python framework', but it does not give version numbers for Python, pymoo, or any other software dependency.
Experiment Setup | Yes | "For every algorithm, we run every experiment with 10 different random seeds and the same 50 initial samples. In these experiments, we use a batch of 10 samples in each iteration and ran 20 iterations in total. All hyperparameters used are presented in Appendix B."
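The excerpts above pin down both the benchmark objectives and the evaluation budget. A minimal sketch of that protocol, assuming the standard closed-form ZDT1 benchmark and a hypothetical `propose_batch` that stands in for DGEMO's diversity-guided batch selection (the paper's Algorithm 2; here replaced by uniform random sampling purely for illustration):

```python
import numpy as np

# Constants taken from the Experiment Setup row above.
N_INIT, BATCH_SIZE, N_ITER = 50, 10, 20

def zdt1(x):
    """Standard two-objective ZDT1 benchmark on x in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])   # = 1 + 9 * sum(x_2..x_n) / (n - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

def propose_batch(X, Y, batch_size, rng):
    """Hypothetical stand-in for DGEMO's diversity-guided batch selection.

    Samples uniformly at random; the actual Algorithm 2 instead picks a
    diverse batch guided by an approximation of the Pareto front.
    """
    return rng.random((batch_size, X.shape[1]))

def run_experiment(objective, n_var, seed):
    rng = np.random.default_rng(seed)
    X = rng.random((N_INIT, n_var))                   # 50 initial samples
    Y = np.array([objective(x) for x in X])
    for _ in range(N_ITER):                           # 20 iterations
        X_new = propose_batch(X, Y, BATCH_SIZE, rng)  # batch of 10
        Y_new = np.array([objective(x) for x in X_new])
        X, Y = np.vstack([X, X_new]), np.vstack([Y, Y_new])
    return X, Y

X, Y = run_experiment(zdt1, n_var=30, seed=0)
print(X.shape, Y.shape)  # (250, 30) (250, 2)
```

Under this setup each of the 10 seeded runs consumes a fixed budget of 50 + 10 × 20 = 250 function evaluations, which is what the reported iteration curves are plotted against.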