SYMBOL: Generating Flexible Black-Box Optimizers through Symbolic Equation Learning
Authors: Jiacheng Chen, Zeyuan Ma, Hongshu Guo, Yining Ma, Jie Zhang, Yue-Jiao Gong
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments reveal that the optimizers generated by SYMBOL not only surpass the state-of-the-art BBO and Meta BBO baselines, but also exhibit exceptional zero-shot generalization abilities across entirely unseen tasks with different problem dimensions, population sizes, and optimization horizons. Furthermore, we conduct in-depth analyses of our SYMBOL framework and the optimization rules that it generates, underscoring its desirable flexibility and interpretability. |
| Researcher Affiliation | Academia | Jiacheng Chen¹, Zeyuan Ma¹, Hongshu Guo¹, Yining Ma², Jie Zhang², Yue-Jiao Gong¹; ¹South China University of Technology, ²Nanyang Technological University. {jackchan9345, scut.crazynicolas, guohongshu369}@gmail.com, {yining.ma, zhangj}@ntu.edu.sg, gongyuejiao@gmail.com |
| Pseudocode | Yes | Algorithm 2 illustrates the pseudocode for the training process of the three strategies of SYMBOL. |
| Open Source Code | Yes | We release the Python implementation at https://github.com/GMC-DRL/Symbol, where we show how to train SYMBOL with different strategies and how to generalize it to unseen problems. |
| Open Datasets | Yes | Our training dataset is synthesized based on the well-known IEEE CEC Numerical Optimization Competition (Mohamed et al., 2021) benchmark, which contains ten challenging synthetic BBO problems (f1–f10). |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning for validation. |
| Hardware Specification | Yes | All experiments are run on a machine with Intel i9-10980XE CPU, RTX 3090 GPU and 32GB RAM. |
| Software Dependencies | No | The paper mentions 'implementation python codes' but does not specify version numbers for Python or any specific libraries/dependencies used. |
| Experiment Setup | Yes | The tunable parameter λ for SYMBOL-S is set to 1. We simultaneously sample a batch of N = 32 problems from problem distribution D for training. The pre-defined maximum learning steps for PPO is 5 × 10^4. The learning rate α is 10^-3. The number of generations (T) for lower-level optimization is 500. |
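
For orientation, the hyperparameters quoted in the Experiment Setup row can be gathered into a single configuration object. The sketch below is a hypothetical Python illustration, not code from the released repository: the `SymbolConfig` class and the commented helpers `sample_problem_batch`, `run_lower_level_optimizer`, and `ppo_update` are our own naming. It simply mirrors the reported values (λ = 1, N = 32, 5 × 10^4 PPO steps, α = 10^-3, T = 500); consult https://github.com/GMC-DRL/Symbol for the authors' actual setup.

```python
from dataclasses import dataclass

@dataclass
class SymbolConfig:
    """Hypothetical container mirroring the hyperparameters reported in the paper."""
    lam: float = 1.0                 # tunable parameter λ for SYMBOL-S
    batch_size: int = 32             # N: problems sampled per batch from distribution D
    max_ppo_steps: int = 5 * 10**4   # pre-defined maximum learning steps for PPO
    learning_rate: float = 1e-3      # α
    generations: int = 500           # T: lower-level optimization horizon

cfg = SymbolConfig()

# Training-loop structure implied by the setup description (helpers are hypothetical):
# for step in range(cfg.max_ppo_steps):
#     problems = sample_problem_batch(D, cfg.batch_size)
#     rollouts = [run_lower_level_optimizer(p, cfg.generations) for p in problems]
#     ppo_update(policy, rollouts, lr=cfg.learning_rate)
```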