Pretrained Optimization Model for Zero-Shot Black Box Optimization

Authors: Xiaobin Li, Kai Wu, Yujian Li, Xiaoyu Zhang, Handing Wang, Jing Liu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation on the BBOB benchmark and two robot control tasks demonstrates that POM outperforms state-of-the-art black-box optimization methods, especially on high-dimensional tasks. Fine-tuning POM with a small number of samples and a small budget yields significant performance improvements. Moreover, POM demonstrates robust generalization across diverse task distributions, dimensions, population sizes, and optimization horizons.
Researcher Affiliation | Academia | Xiaobin Li (Xidian University, 22171214784@stu.xidian.edu.cn); Kai Wu (Xidian University, kwu@xidian.edu.cn); Yujian Betterrest Li (Xidian University, bebetterest@outlook.com); Xiaoyu Zhang (Xidian University, xiaoyuzhang@xidian.edu.cn); Handing Wang (Xidian University, hdwang@xidian.edu.cn); Jing Liu (Xidian University, neouma@mail.xidian.edu.cn)
Pseudocode | Yes | Algorithm 1 (MetaGBT) and Algorithm 2 (Driving POM to Solve Problem). A hedged sketch of an Algorithm 2-style driving loop appears after this table.
Open Source Code | Yes | For the code implementation, see https://github.com/ninja-wm/POM/.
Open Datasets | Yes | We evaluate the generalization ability of POM across the 24 BBOB functions with dimensions d = 30 and d = 100, where the optimal solutions are located at 0. Figure 2 presents the critical difference diagram comparing all algorithms (refer to Appendix Tables 4 and 6 and Figures 11, 12, and 13 for detailed results). POM significantly outperforms all methods, showcasing its efficacy across varying dimensions. Despite being trained solely on TF1-TF4 with d = 10, POM excels in higher dimensions (d = {30, 100, 500}), with its performance advantage becoming more pronounced as dimensionality increases. On the complex problems F21-F24, where global structure is weak, POM lags behind LSHADE but surpasses the other methods, which is attributed to its adaptability through fine-tuning. TuRBO [56] is the Bayesian optimization algorithm with the best performance on BBOB [57]. Under small-budget conditions, POM outperforms TuRBO in most cases (see Appendix G for details). An illustrative BBOB-style evaluation sketch appears after this table.
Dataset Splits | No | The paper does not explicitly describe validation splits or validation procedures (as percentages or counts) for the main experiments, beyond stating that POM is trained on a set of training functions (TS) and then evaluated on benchmark problems.
Hardware Specification | Yes | All experiments are performed on a device with a GeForce RTX 3090 24 GB GPU, an Intel Xeon Gold 6126 CPU, and 64 GB of RAM. A hedged environment-check snippet appears after this table.
Software Dependencies | No | The paper mentions software packages such as Geatpy [60], cmaes, and pyade used for the baselines, but it does not specify exact version numbers. It cites "Geatpy: The genetic and evolutionary algorithm toolbox with high performance in Python, 2020", which gives a year but not a precise version. A version-recording sketch appears after this table.
Experiment Setup | Yes | POM is trained on TS with T = 100, n = 100, and d = 10. Detailed parameters for all compared methods are provided in Appendix E; see Appendix D for the rationale for choosing these algorithms. A hedged configuration sketch appears after this table.
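
The sketches below illustrate several rows of the table; none of them reproduce the paper's actual code. First, a minimal sketch of the kind of driving loop Algorithm 2 describes: a pretrained optimization model proposes a new population from the current one at each step. The `PretrainedOptimizer` class, its `propose` method, and the toy update rule are illustrative assumptions, not the paper's learned model.

```python
import numpy as np

class PretrainedOptimizer:
    """Stand-in for a pretrained optimization model (hypothetical API).

    A real POM is a learned network; this toy update merely mimics the
    interface by pulling individuals toward the current best solution.
    """
    def propose(self, population, fitness):
        best = population[np.argmin(fitness)]
        noise = 0.1 * np.random.randn(*population.shape)
        return population + 0.5 * (best - population) + noise

def drive(model, objective, d=10, n=100, T=100, seed=0):
    """Run the model for T steps on an objective (Algorithm 2-style loop)."""
    rng = np.random.default_rng(seed)
    population = rng.uniform(-5.0, 5.0, size=(n, d))
    for _ in range(T):
        fitness = np.apply_along_axis(objective, 1, population)
        population = model.propose(population, fitness)
    fitness = np.apply_along_axis(objective, 1, population)
    return population[np.argmin(fitness)], float(fitness.min())

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = drive(PretrainedOptimizer(), sphere)
print(f"best f = {best_f:.4g}")
```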
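
Second, an evaluation sketch for the d = 30 and d = 100 setting. The standard COCO/BBOB suite ships only fixed dimensions, so two hand-coded BBOB-style functions with optima at the origin (sphere and Rastrigin) stand in for the 24-function suite; the random-search baseline is purely illustrative.

```python
import numpy as np

def sphere(x):
    # Unimodal BBOB-style test function, optimum f(0) = 0.
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Highly multimodal BBOB-style test function, optimum f(0) = 0.
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def random_search(fn, d, budget=10_000, seed=0):
    """Baseline: sample uniformly in [-5, 5]^d and keep the best value."""
    rng = np.random.default_rng(seed)
    samples = rng.uniform(-5.0, 5.0, size=(budget, d))
    return min(fn(x) for x in samples)

for d in (30, 100):  # dimensions evaluated in the paper
    for fn in (sphere, rastrigin):
        print(f"{fn.__name__}, d={d}: best f = {random_search(fn, d):.4g}")
```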
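
Third, a generic environment check against the reported hardware; this is not from the paper, and `psutil` is an assumed extra dependency for the RAM query.

```python
import platform

import psutil  # assumed installed: pip install psutil
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA GPU visible")
print(f"CPU: {platform.processor()}")
print(f"RAM: {psutil.virtual_memory().total / 1024**3:.0f} GB")
```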
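
Fourth, since the paper pins no versions, recording the installed baseline packages at run time aids reproducibility. The distribution names below are assumptions (the pyade project, for instance, may be published under a different name on PyPI); adjust them to the actual environment.

```python
from importlib import metadata

# Distribution names are assumed, not taken from the paper.
for dist in ("geatpy", "cmaes", "pyade-python"):
    try:
        print(f"{dist}=={metadata.version(dist)}")
    except metadata.PackageNotFoundError:
        print(f"{dist}: not installed")
```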
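
Finally, the stated training setting can be captured in a small configuration object. Only the values (T = 100, n = 100, d = 10) come from the paper; the field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class POMTrainConfig:
    # Values from the paper; field names are illustrative only.
    horizon_T: int = 100     # optimization steps per training rollout
    population_n: int = 100  # population size
    dimension_d: int = 10    # dimensionality of the training functions

print(POMTrainConfig())
```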