Hypervolume Maximization: A Geometric View of Pareto Set Learning
Authors: Xiaoyuan Zhang, Xi Lin, Bo Xue, Yifan Chen, Qingfu Zhang
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed approach on various benchmark problems and real-world problems, and the encouraging results make it a potentially viable alternative to existing multiobjective algorithms. Code is available at https://github.com/xzhang2523/hvpsl/tree/master. 6 Experiments: This section demonstrates that our method can generate high-quality continuous Pareto solutions by the Pareto neural model for multiobjective synthetic, design, and control problems. We evaluated our approach's performance against established methods using well-known benchmark problems such as ZDT1-2 (m=2) [42], VLMOP1-2 (m=2) [38], and real-world design problems like Four Bar Truss Design (RE21, m=2), Hatch Cover Design (RE24, m=2), and Rocket Injector Design (m=3) [43] as well as MO-LQR (m=2,3) [44]. |
| Researcher Affiliation | Academia | Xiaoyuan Zhang^a, Xi Lin^a, Bo Xue^a, Yifan Chen^b, Qingfu Zhang^a. ^a Department of Computer Science, City University of Hong Kong; City University of Hong Kong Shenzhen Research Institute. ^b Departments of Mathematics and Computer Science, Hong Kong Baptist University. |
| Pseudocode | Yes | A.5 Pseudocode For completeness, we provide the pseudocode of the proposed method. PSL-HV1 is selected as an example. η is a positive learning rate. Algorithm 1: PSL-HV1. |
| Open Source Code | Yes | Code is available at https://github.com/xzhang2523/hvpsl/tree/master. |
| Open Datasets | Yes | We evaluated our approach's performance against established methods using well-known benchmark problems such as ZDT1-2 (m=2) [42], VLMOP1-2 (m=2) [38], and real-world design problems like Four Bar Truss Design (RE21, m=2), Hatch Cover Design (RE24, m=2), and Rocket Injector Design (m=3) [43] as well as MO-LQR (m=2,3) [44]. ... [42] Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation, 8(2):173–195, 2000. ... [43] Ryoji Tanabe and Hisao Ishibuchi. An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 89:106078, 2020. ... [44] Simone Parisi, Matteo Pirotta, and Jan Peters. Manifold-based multi-objective policy search with sample reuse. Neurocomputing, 263:3–14, 2017. |
| Dataset Splits | No | The paper uses well-known benchmark problems but does not specify how the data for these problems is split into training, validation, and test sets, nor does it provide absolute sample counts or references to predefined splits for reproduction. It refers to these as 'problems' rather than traditional datasets with explicit splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory, or cloud instance types) used for running its experiments. It mentions training duration and batch size, but no hardware. |
| Software Dependencies | No | The paper mentions 'pytorch' as an implementation tool but does not provide specific version numbers for PyTorch or any other software dependencies, which are necessary for full reproducibility. It refers to a 'Moon' framework but without version details. |
| Experiment Setup | Yes | We employed a 4-layer fully connected neural network, similar to [37], to construct our Pareto neural model xβ(·). The network is optimized using Stochastic Gradient Descent (SGD) with a batch size of 256. ... The first three layers are, xβ(·): θ → Linear(m−1, 64) → ReLU → Linear(64, 64) → ReLU → Linear(64, 64) → ReLU → xmid. (13) For constrained problems, to satisfy the constraint that the solution xβ(λ) must fall within the lower bound (l) and upper bound (u), a sigmoid activation function is used to map the previous layer's output to these boundaries, xmid → Linear(64, n) → Sigmoid · (u − l) + l → Output xβ(λ). (14) For unconstrained problems, the output solution is obtained through a linear combination of xmid, xmid → Linear(64, n) → Output xβ(λ). (15) All experiments were conducted with 1000 iterations for PSL-HV1, PSL-HV2, LS-based PSL, and Tchebycheff-based PSL, while EPO-based PSL is limited to 100 iterations due to time limitation. |
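The architecture quoted in the Experiment Setup row (Eqs. (13)–(14): three Linear+ReLU layers of width 64, then a linear head whose sigmoid output is rescaled to the box [l, u]) can be sketched as a plain forward pass. This is a minimal illustration in numpy, not the authors' PyTorch implementation; the class and layer names are hypothetical, and the weights are randomly initialized rather than trained.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ParetoModelSketch:
    """Sketch of the Pareto neural model x_beta(.) described in the paper:
    an (m-1)-dim preference is mapped through three Linear+ReLU layers of
    width 64 (Eq. 13), then a linear head whose sigmoid output is rescaled
    into the feasible box [l, u] (Eq. 14, constrained case)."""

    def __init__(self, m, n, l, u, seed=0):
        rng = np.random.default_rng(seed)
        dims = [m - 1, 64, 64, 64, n]
        self.W = [rng.standard_normal((a, b)) * 0.1
                  for a, b in zip(dims[:-1], dims[1:])]
        self.b = [np.zeros(b) for b in dims[1:]]
        self.l, self.u = np.asarray(l, float), np.asarray(u, float)

    def forward(self, pref):
        h = np.asarray(pref, float)
        for W, b in zip(self.W[:-1], self.b[:-1]):
            h = relu(h @ W + b)            # Eq. (13): hidden layers
        z = h @ self.W[-1] + self.b[-1]     # final linear head
        # Eq. (14): sigmoid output rescaled to the bounds [l, u]
        return sigmoid(z) * (self.u - self.l) + self.l

# Usage: a bi-objective problem (m=2) with 5 box-constrained decision variables.
model = ParetoModelSketch(m=2, n=5, l=np.zeros(5), u=np.ones(5))
x = model.forward(np.array([0.3]))  # x lies inside [0, 1]^5 by construction
```

The sigmoid rescaling makes the box constraint hold for any network weights, which is why no projection step is needed during SGD training.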