Efficient Meta Neural Heuristic for Multi-Objective Combinatorial Optimization

Authors: Jinbiao Chen, Jiahai Wang, Zizhen Zhang, Zhiguang Cao, Te Ye, Siyuan Chen

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the multi-objective traveling salesman problem (MOTSP), the multi-objective capacitated vehicle routing problem (MOCVRP), and the multi-objective knapsack problem (MOKP) show that EMNH outperforms the state-of-the-art neural heuristics in terms of solution quality and learning efficiency, and yields solutions competitive with strong traditional heuristics in much shorter time.
Researcher Affiliation | Academia | Jinbiao Chen (1), Jiahai Wang (1,2,3), Zizhen Zhang (1), Zhiguang Cao (4), Te Ye (1), Siyuan Chen (1). (1) School of Computer Science and Engineering, Sun Yat-sen University, P.R. China; (2) Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, Sun Yat-sen University, P.R. China; (3) Guangdong Key Laboratory of Big Data Analysis and Processing, Guangzhou, P.R. China; (4) School of Computing and Information Systems, Singapore Management University, Singapore
Pseudocode | Yes | Algorithm 1: Accelerated training process. (A generic, hedged sketch of this kind of meta-update appears after the table.)
Open Source Code | Yes | Our code is publicly available (https://github.com/bill-cjb/EMNH).
Open Datasets | Yes | We conduct computational experiments to evaluate the proposed method on the multi-objective traveling salesman problem (MOTSP), the multi-objective capacitated vehicle routing problem (MOCVRP), and the multi-objective knapsack problem (MOKP). Following the convention in [26, 12], we consider instances of sizes n = 20/50/100 for MOTSP/MOCVRP and n = 50/100/200 for MOKP. [...] We test the generalization ability of the model on 200 larger-scale random instances (n = 150/200) and 3 commonly used MOTSP benchmark instances (KroAB100/150/200) from TSPLIB [52]. (A hedged instance-generation sketch appears after the table.)
Dataset Splits | No | The paper mentions 'a validation dataset' but does not provide specific details about its split or size, such as percentages or sample counts.
Hardware Specification | Yes | All experiments are run on a PC with an Intel Xeon 4216 CPU and an RTX 3090 GPU.
Software Dependencies | No | The paper mentions using the Adam optimizer but does not list its software dependencies or their version numbers.
Experiment Setup | Yes | The meta-learning rate ϵ is linearly annealed from ϵ0 = 1 to 0. The learning rate of the Adam optimizer is held constant at 10^-4. We set B = 64, Tm = 3000, Tu = 100, and N = M. (These values and the annealing schedule are collected in the last sketch after the table.)
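
For the pseudocode row: the paper's Algorithm 1 gives the exact accelerated training process, which is not reproduced here. As orientation only, below is a minimal sketch of the first-order (Reptile-style) meta-update that such meta-training loops typically perform, using the Tm iterations and linearly annealed ϵ reported in the experiment-setup row. The names `meta_model`, `sample_task`, and `fine_tune` are our placeholders, not the authors' API, and the loop is an assumption rather than the paper's exact procedure.

```python
import copy
import torch

# Hypothetical sketch only: a generic first-order (Reptile-style) meta-update.
# `sample_task` and `fine_tune` are placeholder callbacks, not the authors' API.
def meta_train(meta_model, sample_task, fine_tune, T_m=3000, eps0=1.0):
    for t in range(T_m):
        eps = eps0 * (1 - t / T_m)         # meta-learning rate, linearly annealed to 0
        task = sample_task()               # e.g., a weight vector defining a subproblem
        model = copy.deepcopy(meta_model)  # task-specific copy of the meta-model
        fine_tune(model, task)             # a few gradient steps (e.g., T_u) on the task
        with torch.no_grad():              # Reptile update: move the meta-weights
            for p_meta, p_task in zip(meta_model.parameters(), model.parameters()):
                p_meta.add_(eps * (p_task - p_meta))  # toward the fine-tuned weights
    return meta_model
```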
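For the datasets row: the paper samples random instances following the convention of [26, 12]. The generators below are a hedged guess at that convention (uniform sampling in [0, 1], one set of coordinates or values per objective); the exact distribution should be checked against the released code.

```python
import numpy as np

# Assumed instance generators; the uniform-[0, 1] convention is common in the
# neural MOCO literature but is our assumption here, not quoted from the paper.
def random_motsp_instance(n, m=2, seed=None):
    """n nodes, m objectives; one 2-D coordinate set per objective in [0, 1]^2."""
    rng = np.random.default_rng(seed)
    return rng.random((n, m, 2))

def random_mokp_instance(n, m=2, seed=None):
    """n items with uniform weights in [0, 1] and one value vector per objective."""
    rng = np.random.default_rng(seed)
    return rng.random(n), rng.random((m, n))

inst = random_motsp_instance(100)  # a size-100 bi-objective TSP instance
```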
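Finally, for the experiment-setup row: the reported hyperparameters collected in one runnable snippet, with the linear annealing written out. The values are quoted from the paper; the variable names and the schedule function are ours (the paper also sets N = M, in its own symbols).

```python
# Values quoted from the paper; names are ours.
B    = 64      # batch size
T_m  = 3000    # meta-iterations
T_u  = 100     # fine-tuning (update) steps
lr   = 1e-4    # constant Adam learning rate
eps0 = 1.0     # initial meta-learning rate

def meta_lr(t):
    """ϵ linearly annealed from ϵ0 = 1 to 0 over the T_m meta-iterations."""
    return eps0 * (1 - t / T_m)

assert meta_lr(0) == 1.0 and meta_lr(T_m) == 0.0
```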