Joint Entropy Search for Multi-Objective Bayesian Optimization

Authors: Ben Tu, Axel Gandy, Nikolas Kantas, Behrang Shafei

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically evaluate the JES acquisition function on a range of synthetic and real-world benchmark problems. We compare this approach with some popular acquisition functions in multi-objective BO: TSEMO [12], ParEGO [51], NParEGO [19], EHVI [18], NEHVI [19], PES [31, 33] and MES-0 [80]. We present the log HV discrepancy results for both the sequential and batch experiments in Figure 5. (A hedged sketch of the log HV discrepancy metric follows the table.)
Researcher Affiliation | Collaboration | Ben Tu, Axel Gandy, Nikolas Kantas (Imperial College London); Behrang Shafei (BASF SE). Contact: ben.tu16@imperial.ac.uk
Pseudocode | Yes | Algorithm 1: Joint Entropy Search (JES). (The acquisition it computes is reconstructed after the table.)
Open Source Code | Yes | The complete details of the experiments are outlined in Appendix L, whilst the code is available at https://github.com/benmltu/JES.
Open Datasets | Yes | Synthetic benchmark: the ZDT2 [22] benchmark with D = 6 inputs and M = 2 objectives (instantiated in a sketch after the table). Chemical reaction: a nucleophilic aromatic substitution (SnAr) reaction between 2,4-difluoronitrobenzene and pyrrolidine in ethanol, producing a mixture of a desired product and two side-products [45]. Pharmaceutical manufacturing: optimizing the penicillin production process outlined in [56]. Marine design: optimizing a family of bulk carriers subject to the constraints imposed on ships travelling through the Panama Canal [65, 73], using the reformulation in [83], which converts the constraints into another objective.
Dataset Splits | No | The problem setting is Bayesian optimization, which acquires data sequentially rather than relying on pre-defined training/validation/test splits; the paper accordingly specifies no fixed split percentages or counts.
Hardware Specification | Yes | Wall times are reported in seconds and are measured on a MacBook Pro M1 Max.
Software Dependencies | No | All algorithms are based on the open-source Python library BoTorch [3], which uses features from GPyTorch [30] for Gaussian process regression and PyTorch [66] for automatic differentiation; the Pareto set recommendation X̂ of 50 points is generated by maximizing the posterior mean with a multi-objective solver (NSGA2 [22] from the pymoo library [10]). The paper names these software packages but does not provide version numbers for them. (A sketch of the NSGA2 recommendation step follows the table.)
Experiment Setup | Yes | All experiments are repeated using 100 different initial seeds, and the Pareto set recommendation X̂ of 50 points is generated by maximizing the posterior mean using a multi-objective solver (NSGA2 [22] from the pymoo library [10]). Observations are corrupted with additive zero-mean Gaussian noise whose standard deviation is approximately 10% of the objective ranges. (A sketch of this noise model follows the table.)
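
The log HV discrepancy referenced in the Research Type row measures how far a recovered Pareto front falls short of the true front in hypervolume. A minimal sketch, assuming objectives are maximized and using BoTorch's Hypervolume utility; the function and tensor names here are ours, not the paper's:

```python
import torch
from botorch.utils.multi_objective.hypervolume import Hypervolume

def log_hv_discrepancy(true_front, approx_front, ref_point):
    """log10 of the hypervolume gap between the true and recovered fronts.

    Both fronts are (n, M) tensors of objectives to MAXIMIZE; ref_point is an
    M-dim tensor dominated by every point of the true front, so the gap is
    non-negative whenever approx_front is attainable.
    """
    hv = Hypervolume(ref_point=ref_point)
    gap = hv.compute(true_front) - hv.compute(approx_front)
    return torch.log10(torch.as_tensor(gap))
```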
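
Algorithm 1's acquisition is the mutual information between the next observation y at a candidate x and the Pareto optimal inputs and outputs (X*, Y*). Reconstructed from that description, so the notation may differ slightly from the paper's:

```latex
\alpha_{\mathrm{JES}}(x)
  = H\big[\, p(y \mid D_n, x) \,\big]
  - \mathbb{E}_{(X^*, Y^*) \sim p(\,\cdot \mid D_n)}
      \Big[ H\big[\, p(y \mid D_n, x, X^*, Y^*) \,\big] \Big]
```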
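
The ZDT2 configuration from the Open Datasets row can be reproduced with pymoo, the library the paper already uses for NSGA2. A minimal sketch, assuming a recent pymoo release where get_problem lives under pymoo.problems:

```python
import numpy as np
from pymoo.problems import get_problem

# ZDT2 with D = 6 inputs and M = 2 objectives, matching the paper's setup.
problem = get_problem("zdt2", n_var=6)

# Evaluate a random batch of designs; ZDT2's domain is the unit hypercube.
X = np.random.default_rng(0).random((10, problem.n_var))
F = problem.evaluate(X)              # shape (10, 2); pymoo objectives are minimized
true_front = problem.pareto_front()  # analytic front, usable in the log-HV metric above
```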
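
The recommendation step (maximize the posterior mean with NSGA2 and keep 50 points) can be sketched as follows. DummyModel is a purely illustrative stand-in for a fitted multi-output GP, and the generation budget is our assumption:

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import Problem
from pymoo.optimize import minimize

class DummyModel:
    """Illustrative stand-in for a fitted multi-output GP posterior mean."""
    def posterior_mean(self, X):
        return np.stack([X.sum(axis=1), (1.0 - X).sum(axis=1)], axis=1)

class PosteriorMeanProblem(Problem):
    """Wraps the (hypothetical) posterior mean; negated because pymoo minimizes."""
    def __init__(self, model, n_var=6, n_obj=2):
        super().__init__(n_var=n_var, n_obj=n_obj, xl=0.0, xu=1.0)
        self.model = model

    def _evaluate(self, X, out, *args, **kwargs):
        out["F"] = -self.model.posterior_mean(X)

res = minimize(
    PosteriorMeanProblem(DummyModel()),
    NSGA2(pop_size=50),  # population of 50, matching the 50-point recommendation
    ("n_gen", 100),      # assumed budget; the paper does not quote one here
    verbose=False,
)
X_hat = res.X  # Pareto set recommendation, up to 50 points
```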
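
The observation model in the Experiment Setup row, additive zero-mean Gaussian noise with standard deviation at roughly 10% of the objective ranges, can be sketched like this; here each range is estimated empirically from the data rather than from the known objective bounds the authors presumably used:

```python
import numpy as np

def corrupt_observations(Y, noise_frac=0.10, seed=0):
    """Add zero-mean Gaussian noise with std ~= noise_frac * objective range.

    Y is an (n, M) array of noiseless objective values; each objective's
    range is estimated from Y itself (an assumption on our part).
    """
    rng = np.random.default_rng(seed)
    obj_range = Y.max(axis=0) - Y.min(axis=0)
    return Y + rng.normal(scale=noise_frac * obj_range, size=Y.shape)
```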