Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization

Authors: Richard Zhang, Daniel Golovin

ICML 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We empirically validate our theoretical contributions by running our multi-objective algorithms with hypervolume scalarizations on the Black-Box Optimization Benchmark (BBOB) functions, which can be used for bi-objective optimization problems (Tušar et al., 2016). We see that our multi-objective Bayesian optimization algorithms, which admit strong regret bounds, consistently outperform the multi-objective evolutionary algorithms. Furthermore, we observe the superior performance of the hypervolume scalarization functions over other scalarizations, although that difference is less pronounced when the Pareto frontier is even somewhat convex." (A sketch of the hypervolume scalarization appears after the table.) |
| Researcher Affiliation | Industry | "Google Brain, Pittsburgh, Pennsylvania, USA. Correspondence to: Richard Zhang <qiuyiz@google.com>." |
| Pseudocode | Yes | Algorithm 1 ("Scalarization for Multi-Objective Bayesian Optimization") and Algorithm 2 ("Scalarization with General Single-Objective Optimization"). (Hedged sketches of both appear after the table.) |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code. |
| Open Datasets | Yes | "We empirically validate our theoretical contributions by running our multi-objective algorithms with hypervolume scalarizations on the Black-Box Optimization Benchmark (BBOB) functions, which can be used for bi-objective optimization problems (Tušar et al., 2016)." (An example of loading this suite follows the table.) |
| Dataset Splits | No | The paper evaluates on Black-Box Optimization Benchmark (BBOB) functions, which are optimization problems rather than traditional datasets, so no train/validation/test splits apply. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models or processor types used for running its experiments. |
| Software Dependencies | No | The paper mentions methods and algorithms used but does not provide specific software dependencies or library version numbers required for reproduction. |
| Experiment Setup | Yes | "We run each of our algorithms in dimensions n = 8, 16, 24 and optimize for 70 iterations with 5 repeats. Our algorithms are the Random algorithm, UCB algorithm, and Evolutionary Strategy (ES). Our scalarizations include the linear and hypervolume scalarizations with the weight distribution D_λ uniform on S^1_+. Note that for brevity, we do not include the Chebyshev scalarization because it is almost a monotonic transformation of the hypervolume scalarization with a different weight distribution. We run the UCB algorithm via an implementation of Algorithm 1 with a constant standard deviation multiplier of 1.8 and a standard Matérn kernel, while we run the ES algorithms using Algorithm 2 with T = 1 and l = 70 by relying on a well-known single-objective evolutionary strategy known as Eagle (Yang & Deb, 2010)." (A UCB step sketch follows the table.) |
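
As context for the scalarizations named in the table, below is a minimal NumPy sketch of the hypervolume scalarization from the paper, s_λ(y) = min_i max(0, y_i/λ_i)^k, applied to y − z for a reference point z, with weights λ drawn uniformly from S^{k−1}_+ (the positive orthant of the unit sphere). The paper shows that the expected maximum of this scalarization over a point set recovers the dominated hypervolume up to a constant depending only on k. Function names are illustrative, not taken from the authors' code.

```python
import numpy as np

def sample_weight(k, rng):
    """Draw lambda uniformly from S^{k-1}_+, the positive orthant of the
    unit sphere (take |gaussian| coordinates and normalize)."""
    v = np.abs(rng.standard_normal(k))
    return v / np.linalg.norm(v)

def hypervolume_scalarization(y, lam, z):
    """s_lambda(y) = min_i max(0, (y_i - z_i) / lam_i)^k, the paper's
    hypervolume scalarization (maximization convention, reference point z)."""
    y, lam, z = np.asarray(y), np.asarray(lam), np.asarray(z)
    return float(np.min(np.maximum(0.0, (y - z) / lam)) ** len(y))

def linear_scalarization(y, lam, z):
    """The linear scalarization used as a baseline: lam . (y - z)."""
    return float(np.dot(lam, np.asarray(y) - np.asarray(z)))
```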
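The Pseudocode row names Algorithm 2, which plugs scalarizations into any single-objective optimizer. The sketch below (reusing the helpers above) draws a fresh weight each round and hands the scalarized objective to an inner optimizer for l evaluations. A simple (1+1)-style hill climber stands in for the Eagle strategy (Yang & Deb, 2010) that the paper uses; `bounds`, the step size 0.1, and the final Pareto filter are illustrative assumptions.

```python
def algorithm2_sketch(f, k, bounds, T=1, l=70, z=None, seed=0):
    """Sketch of Algorithm 2: each of T rounds draws a weight lam and runs a
    single-objective optimizer on the scalarized objective for l evaluations;
    f maps a point x to k objective values (maximization convention).
    A (1+1)-style hill climber stands in for the Eagle strategy here."""
    rng = np.random.default_rng(seed)
    z = np.zeros(k) if z is None else np.asarray(z)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    evaluated = []
    for _ in range(T):
        lam = sample_weight(k, rng)
        x = rng.uniform(lo, hi)
        y = np.asarray(f(x), float)
        evaluated.append((x, y))
        s_best = hypervolume_scalarization(y, lam, z)
        for _ in range(l - 1):
            x_new = np.clip(x + 0.1 * (hi - lo) * rng.standard_normal(len(lo)),
                            lo, hi)
            y_new = np.asarray(f(x_new), float)
            evaluated.append((x_new, y_new))
            s_new = hypervolume_scalarization(y_new, lam, z)
            if s_new > s_best:  # hill-climb on the scalarized value
                x, s_best = x_new, s_new
    # Report the empirical Pareto frontier of every point evaluated.
    return [(x, y) for x, y in evaluated
            if not any(np.all(y2 >= y) and np.any(y2 > y)
                       for _, y2 in evaluated)]
```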
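The Experiment Setup row says Algorithm 1 is run as UCB with a standard-deviation multiplier of 1.8 and a Matérn kernel. Here is a hedged sketch of one such step, reusing `hypervolume_scalarization` from the first sketch and scikit-learn's GP as a stand-in surrogate. Fitting an independent Matérn GP per objective, maximizing over a finite candidate set, and choosing nu = 2.5 are modeling assumptions of this sketch, not details stated in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def ucb_pick(X, Y, candidates, lam, z, beta=1.8):
    """One Algorithm 1-style step: fit an independent Matern GP to each of
    the k objectives, form the optimistic vector mu + beta * sigma per
    candidate (beta = 1.8, the multiplier from the experiments), and return
    the candidate maximizing the scalarized upper confidence bound."""
    k = Y.shape[1]
    ucb = np.empty((len(candidates), k))
    for i in range(k):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, Y[:, i])
        mu, sigma = gp.predict(candidates, return_std=True)
        ucb[:, i] = mu + beta * sigma
    scores = [hypervolume_scalarization(u, lam, z) for u in ucb]
    return candidates[int(np.argmax(scores))]
```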
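Finally, the BBOB functions cited in the Open Datasets row are distributed through the COCO platform. The snippet below shows one plausible way to load the bi-objective suite with COCO's `cocoex` Python module; the exact attribute names should be checked against the COCO documentation. Note that BBOB problems are minimized, so their objectives must be negated before applying the maximization-convention scalarizations above.

```python
import cocoex  # Python module of the COCO benchmarking platform

# The bbob-biobj suite (Tušar et al., 2016) pairs single-objective BBOB
# functions into bi-objective problems over several dimensions and instances.
suite = cocoex.Suite("bbob-biobj", "", "")
for problem in suite:
    x = problem.initial_solution      # a feasible starting point
    y = problem(x)                    # the two objective values at x (minimized)
    print(problem.name, problem.dimension, y)
    break  # inspect only the first problem
```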