ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution

Authors: Haoran Ye, Jiarui Wang, Zhiguang Cao, Federico Berto, Chuanbo Hua, Haeyeon Kim, Jinkyoo Park, Guojie Song

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility

| Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Across five heterogeneous algorithmic types, six different COPs, and both white-box and black-box views of COPs, ReEvo yields state-of-the-art and competitive meta-heuristics, evolutionary algorithms, heuristics, and neural solvers, while being more sample-efficient than prior LHHs. |
| Researcher Affiliation | Academia | Haoran Ye¹, Jiarui Wang², Zhiguang Cao³, Federico Berto⁴, Chuanbo Hua⁴, Haeyeon Kim⁴, Jinkyoo Park⁴, Guojie Song¹,⁵ — ¹National Key Laboratory of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University; ²Southeast University; ³Singapore Management University; ⁴KAIST; ⁵PKU-Wuhan Institute for Artificial Intelligence; AI4CO |
| Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block, nor structured steps formatted like pseudocode. |
| Open Source Code | Yes | Project website: https://ai4co.github.io/reevo |
| Open Datasets | Yes | We evaluate the constructive heuristic for TSP generated by ReEvo on real-world benchmark instances from TSPLIB [67] in Table 2. |
| Dataset Splits | Yes | We generate training and validation instances following Kim et al. [31]. |
| Hardware Specification | Yes | When conducting runtime comparisons, we employ a single core of an AMD EPYC 7742 CPU and an NVIDIA GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions LLMs (e.g., GPT-3.5 Turbo) but does not provide specific version numbers for general software dependencies, libraries, or frameworks used for implementation beyond these models. |
| Experiment Setup | Yes | Hyperparameters for ReEvo: unless otherwise stated, we adopt the parameters in Table 8 for ReEvo runs. |