Learning to Solve Routing Problems via Distributionally Robust Optimization

Authors: Yuan Jiang, Yaoxin Wu, Zhiguang Cao, Jie Zhang (pp. 9786–9794)

AAAI 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "The experimental results on the randomly synthesized instances and the ones from two benchmark datasets (i.e., TSPLib and CVRPLib) demonstrate that our approach could significantly improve the cross-distribution generalization performance over the original models." |
| Researcher Affiliation | Collaboration | Yuan Jiang1*, Yaoxin Wu1*, Zhiguang Cao2, Jie Zhang3 — 1SCALE@NTU Corp Lab, Nanyang Technological University, Singapore; 2Singapore Institute of Manufacturing Technology, A*STAR, Singapore; 3School of Computer Science and Engineering, Nanyang Technological University, Singapore |
| Pseudocode | Yes | Algorithm 1: Group DRO for Solving VRPs |
| Open Source Code | No | The paper links to the repositories of the baseline models (GCN and POMO) used for comparison, but does not provide access to the source code of its proposed methods (DROG and DROP). |
| Open Datasets | Yes | "We evaluate the efficacy of our approach on the randomly generated instances, and also the ones from two benchmark datasets, i.e., TSPLib and CVRPLib, respectively. ... We train our models on 150,000 instances, which include 100,000 instances from uniform distribution, and 50,000 instances from the other five, with 10,000 for each..." TSPLib (Reinelt 1991) and CVRPLib (Uchoa et al. 2017) |
| Dataset Splits | No | The paper gives counts for training and test instances ("100,000 typical instances and 10,000 atypical instances" for training; "10,000 typical instances and 1,000 atypical instances" for testing), but does not specify a validation set or explicit split percentages. |
| Hardware Specification | Yes | "We run the experiments on the device with a single Nvidia GeForce RTX 2080Ti GPU and a single Intel Xeon i9-10940X CPU at 3.3 GHz." |
| Software Dependencies | Yes | "Our approach is implemented in PyTorch 1.2 (Paszke et al. 2019) with Python 3.7." |
| Experiment Setup | Yes | "The learning rate η is 10^-4 for all experiments. ... We trained the models up to 2,000 epochs (~10 days)... Regarding the convolutional embedding layer by CNN, the input length for each node equals 2 (coordinates) for TSP and 3 (coordinates + demand) for CVRP. We extract the spatial pattern of K=10 nearest nodes with kernel size 11 and set the number of kernels to 128. The output dimension is fixed to 128." |
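The paper's Algorithm 1 applies Group DRO, which maintains one weight per instance group (here, per training distribution) and up-weights the groups on which the model currently performs worst. Since the authors' code is unreleased, the following is only a minimal NumPy sketch of the generic Group DRO exponentiated-gradient weight update; the function name, step size `eta_q`, and the example losses are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def group_dro_step(group_losses, q, eta_q=1.0):
    """One Group DRO update: exponentiated-gradient ascent on the
    group weights q, then renormalization. Returns the new weights
    and the robust (weighted) loss the model would minimize."""
    q = q * np.exp(eta_q * np.asarray(group_losses))  # up-weight high-loss groups
    q = q / q.sum()                                   # project back onto the simplex
    robust_loss = float(np.dot(q, group_losses))      # worst-case-weighted objective
    return q, robust_loss

# Six groups (e.g., uniform plus five other distributions), equal initial weights.
q = np.full(6, 1 / 6)
losses = np.array([0.2, 0.3, 0.25, 0.9, 0.4, 0.35])  # hypothetical per-group losses
q, robust = group_dro_step(losses, q)
# After the update, the worst group (loss 0.9) carries the largest weight.
```

In a full training loop this update would alternate with ordinary gradient descent on the model parameters, using `robust_loss` (or the loss of each sampled group scaled by its weight) as the objective.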