Multi-objective optimization via equivariant deep hypervolume approximation

Authors: Jim Boelrijk, Bernd Ensing, Patrick Forré

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method against exact and approximate hypervolume methods in terms of accuracy, computation time, and generalization. We also apply and compare our methods to state-of-the-art multi-objective BO methods and EAs on a range of synthetic and real-world benchmark test cases.
Researcher Affiliation | Academia | Jim Boelrijk, AI4Science Lab, AMLab, Informatics Institute, HIMS, University of Amsterdam (j.h.m.boelrijk@uva.nl); Bernd Ensing, AI4Science Lab, HIMS, University of Amsterdam (b.ensing@uva.nl); Patrick Forré, AI4Science Lab, AMLab, Informatics Institute, University of Amsterdam (p.d.forre@uva.nl)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; the methods are described in textual form.
Open Source Code | Yes | Code, models, and datasets used in this work can be found at: https://github.com/Jimbo994/deephv-iclr.
Open Datasets | Yes | Code, models, and datasets used in this work can be found at: https://github.com/Jimbo994/deephv-iclr. We split our datasets into 800K training points and 100K validation and test points, respectively.
Dataset Splits | Yes | We split our datasets into 800K training points and 100K validation and test points, respectively. (A minimal split sketch follows the table.)
Hardware Specification | Yes | All computations shown in Fig. 2 were performed on an Intel(R) Xeon(R) E5-2640 v4 CPU, and in the case of the GPU calculations on an NVIDIA TITAN X.
Software Dependencies | No | The paper mentions software such as Pymoo and BoTorch but does not provide specific version numbers for these or other key software components used in the experiments.
Experiment Setup | Yes | All DeepHV models have been trained with a learning rate of 10^-5, using Adam and the Mean Absolute Percentage Error (MAPE) loss function (de Myttenaere et al., 2016). For the separate models, we use a batch size of 64 and train for 200 epochs. ... For the models trained on all objective cases simultaneously, we train for 100 epochs with a batch size of 128. (A hedged training sketch follows the table.)
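
The 800K/100K/100K split reported in the Dataset Splits row can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the tensor names, shapes, and the use of random_split are assumptions made purely for illustration; the released repository (https://github.com/Jimbo994/deephv-iclr) contains the actual data pipeline.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Hypothetical data: 1M (solution set, hypervolume) pairs. Names and shapes
# are placeholders, not the authors' data pipeline.
n_total = 1_000_000
pareto_points = torch.rand(n_total, 10, 3)   # 10 candidate points, 3 objectives each
hypervolumes = torch.rand(n_total)           # precomputed exact hypervolume targets

dataset = TensorDataset(pareto_points, hypervolumes)

# 800K train / 100K validation / 100K test, as reported in the paper.
generator = torch.Generator().manual_seed(0)  # fixed seed so the split is reproducible
train_set, val_set, test_set = random_split(
    dataset, [800_000, 100_000, 100_000], generator=generator
)
```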
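
The training settings quoted in the Experiment Setup row (Adam, learning rate 10^-5, MAPE loss, batch size 64, 200 epochs for the separate models) can likewise be sketched. The MLP below is only a stand-in for the equivariant DeepHV network, the mape_loss helper is written by hand because PyTorch has no built-in MAPE loss, and train_set is reused from the split sketch above; none of this is taken from the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def mape_loss(pred, target, eps=1e-8):
    # Mean Absolute Percentage Error (de Myttenaere et al., 2016).
    return torch.mean(torch.abs((target - pred) / (target + eps)))

# Stand-in regressor: the actual DeepHV model is the equivariant architecture
# from the paper; this small MLP only keeps the sketch self-contained.
model = nn.Sequential(nn.Flatten(), nn.Linear(10 * 3, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Separate (per-objective-count) models: batch size 64, 200 epochs.
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
for epoch in range(200):
    for points, hv in train_loader:
        optimizer.zero_grad()
        pred = model(points).squeeze(-1)
        loss = mape_loss(pred, hv)
        loss.backward()
        optimizer.step()

# The models trained on all objective cases simultaneously instead use
# 100 epochs with a batch size of 128.
```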