DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization

Authors: Haoran Ye, Jiarui Wang, Zhiguang Cao, Helan Liang, Yong Li

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 Experimentation. We evaluate DeepACO on eight representative COPs. As shown in Table 1, DeepACO demonstrates competitive performance against the SOTA NCO methods for TSP.
Researcher Affiliation | Academia | 1 School of Computer Science & Technology, Soochow University; 2 School of Computing and Information Systems, Singapore Management University; 3 Department of Electronic Engineering, Tsinghua University
Pseudocode | Yes | Algorithm 1 NLS (a generic 2-opt routine, on which NLS builds, is sketched after this table)
Open Source Code | Yes | Our code is publicly available at https://github.com/henry-yeh/DeepACO.
Open Datasets | Yes | Benchmarks. We evaluate DeepACO on eight representative COPs, including the Traveling Salesman Problem (TSP), Capacitated Vehicle Routing Problem (CVRP), Orienteering Problem (OP), Prize Collecting Traveling Salesman Problem (PCTSP), Sequential Ordering Problem (SOP), Single Machine Total Weighted Tardiness Problem (SMTWTP), Resource-Constrained Project Scheduling Problem (RCPSP), and Multiple Knapsack Problem (MKP). For RCPSP, we draw the instances from PSPLIB [53] for training and held-out testing. For TSP, we draw all 49 real-world symmetric TSP instances featuring EUC_2D and containing fewer than 1K nodes from TSPLIB.
Dataset Splits | No | The paper mentions 'held-out test instances' and models trained at specific problem scales (e.g., TSP20, TSP100, TSP500), but it gives no quantitative details of a dedicated validation split.
Hardware Specification | Yes | Unless otherwise stated, we conduct experiments on a 48-core Intel(R) Xeon(R) Platinum 8350C CPU and an NVIDIA GeForce RTX 3090 graphics card.
Software Dependencies | No | The paper mentions the algorithms and architectures it builds on (e.g., Transformer, GNN) and general techniques such as batch normalization and activation functions, but it does not pin version numbers for any software dependency (e.g., Python, PyTorch, or other libraries).
Experiment Setup | Yes | α and β are the control parameters, which are consistently set to 1 in this work unless otherwise stated. We apply the generic 2-opt for NLS (T_NLS = 10, T_p = 20) and set W in Eq. (4) to 9. We apply 4 decoder heads for multihead DeepACO and set K to 5 for the top-K entropy loss. The coefficients for the additional loss terms (i.e., L_KL, L_H, and L_I) are set to 0.05, 0.05, and 0.02, respectively. (The sketches below illustrate the α/β rule and the 2-opt routine.)
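
For readers unfamiliar with the α and β control parameters quoted in the setup row, below is a minimal NumPy sketch of the standard Ant System transition rule that DeepACO builds on: p(j | i) is proportional to τ[i, j]^α · η[i, j]^β over unvisited nodes. It assumes a pheromone matrix τ and a heuristic matrix η (in DeepACO, η is predicted by a neural model rather than hand-crafted); all function and variable names here are illustrative, not taken from the released code.

```python
import numpy as np

def transition_probs(tau, eta, current, visited, alpha=1.0, beta=1.0):
    """Ant System transition rule: p(j | i) ~ tau[i, j]**alpha * eta[i, j]**beta
    over unvisited nodes j. alpha = beta = 1 matches the paper's default."""
    scores = (tau[current] ** alpha) * (eta[current] ** beta)
    scores[list(visited)] = 0.0  # mask nodes already in the partial tour
    return scores / scores.sum()

def construct_tour(tau, eta, rng, alpha=1.0, beta=1.0):
    """Sample one TSP tour node by node using the rule above."""
    n = tau.shape[0]
    tour = [int(rng.integers(n))]  # random start node
    visited = set(tour)
    while len(tour) < n:
        p = transition_probs(tau, eta, tour[-1], visited, alpha, beta)
        nxt = int(rng.choice(n, p=p))
        tour.append(nxt)
        visited.add(nxt)
    return tour

# Toy usage: uniform pheromone, random positive heuristic on 10 nodes.
rng = np.random.default_rng(0)
tau = np.ones((10, 10))
eta = rng.random((10, 10)) + 1e-9
print(construct_tour(tau, eta, rng))
```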
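
The setup row also states that NLS (Algorithm 1) applies generic 2-opt with T_NLS = 10. Below is a minimal first-improvement 2-opt sketch for symmetric TSP with a NumPy distance matrix, in which max_rounds loosely stands in for T_NLS; DeepACO's full NLS additionally interleaves perturbation guided by the learned heuristic, which this sketch omits. Again, the names are illustrative assumptions, not the authors' implementation.

```python
def tour_length(tour, dist):
    """Cycle length of `tour` under distance matrix `dist`."""
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist, max_rounds=10):
    """First-improvement 2-opt: reversing tour[i:j+1] swaps edges (a, b) and
    (c, d) for (a, c) and (b, d); accept whenever that shortens the tour.
    `max_rounds` caps the number of full sweeps (cf. T_NLS = 10)."""
    tour = list(tour)
    n = len(tour)
    for _ in range(max_rounds):
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                # Accept only strict improvements (with a small tolerance).
                if dist[a, c] + dist[b, d] < dist[a, b] + dist[c, d] - 1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
        if not improved:
            break  # local optimum reached before exhausting the round budget
    return tour
```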