Learning Encodings for Constructive Neural Combinatorial Optimization Needs to Regret
Authors: Rui Sun, Zhi Zheng, Zhenkun Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate the capability of our work to enhance the performance of various NCO models. Results also show that the proposed LCH-Regret outperforms the previous modification methods on several typical combinatorial optimization problems. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; (2) School of System Design and Intelligent Manufacturing and Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The code and Supplementary File are available at https://github.com/SunnyR7/LCH-Regret. |
| Open Datasets | Yes | The TSP instances used for training are randomly sampled coordinates that follow a uniform distribution. We also use TSP instances sampled from different distributions in the testing process. These TSP instances have mixed or clustered distributions, which are proposed and tested in Jiang et al. (2022); Bi et al. (2022). ... For CVRP, we solve the problem prescribed by Kool, van Hoof, and Welling (2019). ... The settings of FFSP are consistent with MatNet (Kwon et al. 2021). ... Algorithms are investigated on several randomly generated test sets of different scales and distributions and TSPLIB (Reinelt 1991). |
| Dataset Splits | No | The paper describes training and test sets and details of the training process (e.g., TSP50/TSP100 instances for training, training epochs, learning rate), but it does not explicitly specify a validation split (e.g., 'X% training, Y% validation, Z% test' or an instance count reserved for validation). |
| Hardware Specification | Yes | Models are trained on a single Tesla-V100 GPU with the same hyperparameters (e.g., the training epoch and learning rate) as the original model. |
| Software Dependencies | No | The paper mentions external solvers such as Concorde, LKH3, and OR-Tools, but does not provide specific version numbers for these or for any other software components or libraries (e.g., PyTorch or Python versions) used in the authors' own implementation. |
| Experiment Setup | Yes | Models are trained on a single Tesla-V100 GPU with the same hyperparameters (e.g., the training epoch and learning rate) as the original model. ... Regret Mask Setups. LCH-Regret employs one-step regret, which is the most straightforward regret operation. ... the basic Regret mask is designed generally based on the following three rules. ... For CVRP, additionally, particular rules are illustrated in Appendix B of the Supplementary File. |