Offline Multi-Objective Optimization

Authors: Ke Xue, Rongxi Tan, Xiaobin Huang, Chao Qian

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results show improvements over the best value of the training set, demonstrating the effectiveness of offline MOO methods. [...] In this section, we empirically examine the performance of different methods on our benchmark."
Researcher Affiliation | Academia | "National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China."
Pseudocode | No | The paper describes its methods narratively and outlines network structures, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Our code is available at https://github.com/lamda-bbo/offline-moo."
Open Datasets | Yes | "In this paper, we propose a first benchmark for offline MOO, where the tasks range from synthetic functions to real-world science and engineering problems... To facilitate future research, we release our benchmark tasks and datasets with a comprehensive evaluation of different approaches and open-source examples. [...] NAS-Bench-201-Test, corresponding error and number of parameters are sourced from Dong & Yang (2020). Additionally, the edge GPU latency data is obtained from Li et al. (2021). [...] We consider two locomotion tasks in the popular MORL benchmark MuJoCo (Todorov et al., 2012). [...] Historical stock prices data of each portfolio is provided by Blank & Deb (2020). [...] We also conduct experiments on seven real-world multi-objective engineering design problems adopted from the RE suite (Tanabe & Ishibuchi, 2020)."
Dataset Splits | Yes | "Thus, similar to (Trabucco et al., 2022), we remove the top solutions sorted by NSGA-II ranking with a given percentile K, where K varies according to different tasks and is usually set to 40%, except for Molecule with 1.2%, RFP and Regex with 20%, and MO-CVRP with 55%."
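The pruning step quoted above can be sketched in plain NumPy: assign each solution a non-domination rank via NSGA-II-style fast non-dominated sorting, then drop the best-ranked K fraction of the dataset. This is an illustrative sketch, not the benchmark's code; `nondominated_ranks` and `remove_top_percentile` are hypothetical helper names, and the actual NSGA-II ranking also breaks ties within a rank using crowding distance, which is omitted here.

```python
import numpy as np

def nondominated_ranks(F):
    """Rank points by non-domination (0 = Pareto front), assuming
    minimization, in the spirit of NSGA-II's fast non-dominated sort."""
    n = len(F)
    ranks = np.full(n, -1)
    remaining = np.arange(n)
    r = 0
    while len(remaining):
        sub = F[remaining]
        front_mask = np.ones(len(remaining), dtype=bool)
        for i in range(len(remaining)):
            # j dominates i if j is <= in every objective and < in at least one
            dominated_by = np.all(sub <= sub[i], axis=1) & np.any(sub < sub[i], axis=1)
            if dominated_by.any():
                front_mask[i] = False
        ranks[remaining[front_mask]] = r       # current front gets rank r
        remaining = remaining[~front_mask]     # recurse on the rest
        r += 1
    return ranks

def remove_top_percentile(X, F, K=0.4):
    """Drop the best K fraction of solutions (lowest ranks first),
    mimicking the benchmark's dataset-pruning step. K is a fraction."""
    ranks = nondominated_ranks(F)
    order = np.argsort(ranks)                  # best-ranked first
    n_remove = int(K * len(X))
    keep = np.sort(order[n_remove:])           # keep the remaining points
    return X[keep], F[keep]
```

With K = 0.4 this removes the 40% of the offline dataset closest to the Pareto front, so that offline MOO methods must extrapolate beyond the best observed solutions rather than simply recall them.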
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running the experiments.
Software Dependencies | Yes | "The implementations of NSGA-II, MOEA/D, and NSGA-III are from the open-source repository pymoo (Blank & Deb, 2020). The implementation of MOBO is inherited from BoTorch (Balandat et al., 2020)."
Experiment Setup | Yes | "The DNN model is trained w.r.t. the offline dataset for 200 epochs with a batch size of 32. [...] We use MSE as the loss function and optimize by Adam with learning rate η = 0.001 and learning-rate decay γ = 0.98. The model architecture and hyperparameters are consistently maintained across all tasks."
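Under those stated hyperparameters (200 epochs, batch size 32, MSE loss, Adam with η = 0.001, per-epoch decay γ = 0.98), the surrogate training loop looks roughly as follows. This is a minimal NumPy sketch, not the authors' code: `train_surrogate` is a hypothetical name, a linear model stands in for the paper's DNN, and Adam is implemented by hand so the example stays dependency-light.

```python
import numpy as np

def train_surrogate(X, y, epochs=200, batch_size=32,
                    lr=1e-3, gamma=0.98, seed=0):
    """Fit a linear surrogate by minimizing MSE with Adam,
    decaying the learning rate by gamma after each epoch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    mw, vw = np.zeros(d), np.zeros(d)          # Adam moments for w
    mb, vb = 0.0, 0.0                          # Adam moments for b
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    t = 0
    for epoch in range(epochs):
        step = lr * gamma ** epoch             # learning-rate decay
        perm = rng.permutation(n)              # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            xb, yb = X[idx], y[idx]
            err = xb @ w + b - yb              # residuals on the batch
            gw = 2 * xb.T @ err / len(idx)     # MSE gradient w.r.t. w
            gb = 2 * err.mean()                # MSE gradient w.r.t. b
            t += 1
            mw = beta1 * mw + (1 - beta1) * gw
            vw = beta2 * vw + (1 - beta2) * gw ** 2
            mb = beta1 * mb + (1 - beta1) * gb
            vb = beta2 * vb + (1 - beta2) * gb ** 2
            # bias-corrected Adam updates
            w -= step * (mw / (1 - beta1 ** t)) / (np.sqrt(vw / (1 - beta2 ** t)) + eps)
            b -= step * (mb / (1 - beta1 ** t)) / (np.sqrt(vb / (1 - beta2 ** t)) + eps)
    return w, b
```

In a multi-objective setting one such surrogate (or one output head) would be trained per objective; keeping the architecture and these hyperparameters fixed across tasks is what makes the benchmark comparison controlled.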