Pareto Set Learning for Expensive Multi-Objective Optimization
Authors: Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, Qingfu Zhang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on different synthetic and real-world problems demonstrate the effectiveness of our proposed method. [...] MOBO Performance. We compare PSL with other MOBO methods on the performance of evaluated solutions. Figure 5 shows the log hypervolume difference to the true/approximate Pareto front for the synthetic/real-world problems during the optimization process. |
| Researcher Affiliation | Academia | Xi Lin, Zhiyuan Yang, Xiaoyuan Zhang, Qingfu Zhang Department of Computer Science, City University of Hong Kong |
| Pseudocode | Yes | Algorithm 1 PSL with GP Models, Algorithm 2 MOBO with PSL, Algorithm 3 Batch Selection with PSL |
| Open Source Code | Yes | We implement the proposed PSL in PyTorch [64]. Code: https://github.com/Xi-L/PSL-MOBO |
| Open Datasets | Yes | The algorithms are first compared on six newly proposed synthetic test instances (see Appendix E.1), as well as the widely-used VLMOP1-3 [88] and DTLZ2 [21] benchmark problems. Then we also conduct experiments on 5 different real-world multi-objective engineering design problems (RE) [85]. |
| Dataset Splits | No | For each experiment, we randomly generate 10 initial solutions for expensive evaluations, and then conduct MOBO with 20 batched evaluations with batch size 5. Therefore, there are 110 expensive evaluations in total. (This describes the evaluation budget, not train/validation/test dataset splits; a structural sketch of this budget follows the table.) |
| Hardware Specification | Yes | The algorithm runtime is computed by running each algorithm for one iteration on a single CPU core with 128GB of RAM. The experiments are run in parallel on an Intel Xeon Gold 6248R CPU cluster. |
| Software Dependencies | No | The paper mentions PyTorch, pymoo, and BoTorch as software used for the implementation, but does not specify their version numbers. |
| Experiment Setup | Yes | Experiment Setting. For each experiment, we randomly generate 10 initial solutions for expensive evaluations, and then conduct MOBO with 20 batched evaluations with batch size 5. [...] In this work, we simply set ρ = 0.001, dynamically update z*_i as the current best value for each objective and let ε = 0.1|z*_i|. [...] We simply set β = 1/2 [...] where we randomly sample K = 10 different valid preferences [...] The MLP for hθ(λ) consists of two hidden layers with 128 neurons each. We use the Softplus activation function for both hidden layers. [...] we select the set XB in a sequential greedy manner from X where |X| = P = 1,000 for all problems. (Sketches of the scalarization, the model architecture, and the evaluation loop follow the table.) |
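
For context, the quoted ρ, z*_i, and ε are parameters of an augmented Tchebycheff scalarization. One common form of that scalarization, reconstructed here for readability rather than quoted verbatim from the paper, is

$$
g^{\text{aug}}(x \mid \lambda) = \max_{1 \le i \le m} \left\{ \lambda_i \left( f_i(x) - (z^*_i - \varepsilon_i) \right) \right\} + \rho \sum_{i=1}^{m} \lambda_i f_i(x),
$$

with ρ = 0.001, z*_i dynamically updated to the best value observed so far for objective f_i, and ε_i = 0.1|z*_i|, matching the settings quoted in the "Experiment Setup" row.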
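
The "Experiment Setup" row fully specifies the Pareto set model's architecture: an MLP hθ(λ) with two hidden layers of 128 neurons and Softplus activations. Below is a minimal PyTorch sketch (the paper's implementation language); the input/output dimensions, the Dirichlet preference sampling, and the final sigmoid (assuming a box-constrained decision space) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ParetoSetModel(nn.Module):
    # Preference-conditioned model h_theta(lambda): maps a preference
    # vector to a candidate Pareto-optimal solution. Layer widths and
    # Softplus activations follow the quoted setup; everything else
    # here is an assumption for illustration.
    def __init__(self, n_obj: int, n_var: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obj, 128),  # input: preference vector lambda
            nn.Softplus(),
            nn.Linear(128, 128),
            nn.Softplus(),
            nn.Linear(128, n_var),  # output: candidate solution x
            nn.Sigmoid(),           # assumption: scale into [0, 1]^n_var
        )

    def forward(self, pref: torch.Tensor) -> torch.Tensor:
        return self.net(pref)

# Usage: sample K = 10 valid preferences from the simplex (assumed
# Dirichlet sampling) and map them to candidate solutions.
model = ParetoSetModel(n_obj=2, n_var=6)
prefs = torch.distributions.Dirichlet(torch.ones(2)).sample((10,))
xs = model(prefs)  # shape (10, 6): one candidate solution per preference
```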
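
The budget arithmetic from the "Dataset Splits" row (10 initial solutions + 20 iterations × batch size 5 = 110 expensive evaluations) maps onto the loop structure of Algorithm 2 (MOBO with PSL). The following is a runnable structural sketch of that budget only: the candidate generation and batch selection are trivial placeholders standing in for the paper's GP surrogates (Algorithm 1) and sequential greedy batch selection (Algorithm 3), and the toy objective is invented for illustration.

```python
import numpy as np

N_INIT, N_ITER, BATCH, N_CANDIDATES, N_VAR = 10, 20, 5, 1000, 6

def evaluate(x):
    # Toy 2-objective problem (placeholder for an expensive evaluation).
    return np.stack([x.sum(axis=1), (1 - x).sum(axis=1)], axis=1)

rng = np.random.default_rng(0)
X = rng.random((N_INIT, N_VAR))  # 10 initial solutions
Y = evaluate(X)

for _ in range(N_ITER):  # 20 batched MOBO iterations
    # Placeholder for: fit GP models, train the PSL model (Algorithm 1),
    # sample K = 10 preferences, build P = 1,000 candidate solutions,
    # and greedily select a batch of 5 points (Algorithm 3).
    candidates = rng.random((N_CANDIDATES, N_VAR))
    batch = candidates[:BATCH]  # stand-in for greedy batch selection
    X = np.vstack([X, batch])
    Y = np.vstack([Y, evaluate(batch)])

assert len(X) == N_INIT + N_ITER * BATCH == 110  # total expensive evaluations
```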