Learning to CROSS exchange to solve min-max vehicle routing problems

Authors: Minjun Kim, Junyoung Park, Jinkyoo Park

ICLR 2023

Reproducibility assessment. Each entry below lists the variable, the assessed result, and the LLM's supporting response:
Research Type: Experimental. "Despite the simplicity of NCE, numerical results show that the NCE trained with min-max flexible multi-depot VRP (min-max FMDVRP) outperforms the meta-heuristic baselines. More importantly, it significantly outperforms the neural baselines when solving distinctive special cases of min-max FMDVRP (e.g., min-max MDVRP, min-max mTSP, min-max CVRP) without additional training."
Researcher Affiliation: Academia. "Minjun Kim, Junyoung Park, & Jinkyoo Park, KAIST, Daejeon, South Korea. {minjun1212,junyoungpark,jinkyoo.park}@kaist.ac.kr"
Pseudocode: Yes. "Algorithm 1: Neuro CROSS exchange (NCE) for solving VRP family ... Algorithm 2: Neuro CROSS operation"
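The NCE pseudocode itself appears in the paper. As a generic point of reference for the row above, the following is a minimal Python sketch of the classical CROSS exchange move that NCE builds on, applied to two routes under a min-max objective. The names (cross_exchange, route_length) are hypothetical, and the exhaustive segment search below is precisely the step that the learned cost-decrement model fθ(·) is designed to shortcut.

import math
from itertools import combinations

def route_length(route, coords):
    # Total Euclidean length of a closed route over point coordinates.
    return sum(math.dist(coords[route[i]], coords[route[(i + 1) % len(route)]])
               for i in range(len(route)))

def cross_exchange(r1, r2, coords):
    # Swap one contiguous segment between r1 and r2 (index 0 is each
    # route's depot), keeping the swap that most reduces the longer of
    # the two route lengths, i.e. the min-max objective.
    best = max(route_length(r1, coords), route_length(r2, coords))
    best_pair = (r1, r2)
    for i, j in combinations(range(1, len(r1) + 1), 2):      # segment r1[i:j]
        for k, l in combinations(range(1, len(r2) + 1), 2):  # segment r2[k:l]
            n1, n2 = r1[:i] + r2[k:l] + r1[j:], r2[:k] + r1[i:j] + r2[l:]
            cost = max(route_length(n1, coords), route_length(n2, coords))
            if cost < best:
                best, best_pair = cost, (n1, n2)
    return best_pair, best

A full solver would apply such a move repeatedly across route pairs; per the paper, NCE instead scores candidate segment pairs with the predicted cost decrement so that only promising swaps are evaluated exactly.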
Open Source Code: No. The paper does not provide an explicit statement about source-code release or a link to a code repository for the described methodology.
Open Datasets: No. The paper generates its own training data: "To train fθ(·), we use the input (τ1, τ2, a1, a2) and output y pairs obtained from 50,000 random min-max FMDVRP instances. The details of the train data generation are described in Appendix G." It does not provide concrete access information (e.g., a link or a citation to a pre-existing public dataset) for any openly available training data.
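The actual sampling scheme is specified in the paper's Appendix G, which is not reproduced here. Purely as a hypothetical illustration of what self-generated training instances can look like, a uniform random generator might be:

import numpy as np

rng = np.random.default_rng(0)

def sample_fmdvrp_instance(n_customers=20, n_depots=3):
    # Hypothetical sketch: depot and customer locations drawn uniformly
    # from the unit square. The paper's real scheme is in its Appendix G.
    return {"depots": rng.random((n_depots, 2)),
            "customers": rng.random((n_customers, 2))}

instances = [sample_fmdvrp_instance() for _ in range(50_000)]  # paper: 50,000 instances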
Dataset Splits: No. The paper describes generating training and evaluation instances, but does not specify explicit train/validation/test splits (e.g., percentages or fixed sample counts) from a single larger dataset. It states "To evaluate fθ(·), we randomly generate 1,000 FMDVRP instances" for evaluation and, for testing, "For small-sized problems (Nc ≤ 10), we employ CPLEX... For the larger-sized problems, we exclude CPLEX from the baselines...", but defines no distinct validation split.
Hardware Specification: Yes. "We run all experiments on the server equipped with AMD Threadripper 2990WX CPU and Nvidia RTX 3090 GPU."
Software Dependencies: Yes. "For small-sized problems (Nc ≤ 10), we employ CPLEX (Cplex, 2009) (an exact method), OR-Tools (Perron & Furnon), CE (full search), ScheduleNet (Park et al., 2021), greedy heuristic (Dell'Amico et al., 1993), and greedy + TSP heuristic as baselines. ... We utilize elkai (Dimitrovski) to solve TSP. ... fθ(·) is trained to minimize the Huber loss for three epochs via AdamW (Loshchilov & Hutter, 2017), whose learning rate is fixed as 5 × 10⁻⁴."
Experiment Setup: Yes. "The cost decrement model fθ(·) is parametrized by the GNN that contains the five attentive embedding layers. We employ 4-layered MLPs to parameterize ϕe, ϕw, ϕn, and ϕc, whose hidden dimensions and activation units are 64 and Mish (Misra, 2019). fθ(·) is trained to minimize the Huber loss for three epochs via AdamW (Loshchilov & Hutter, 2017), whose learning rate is fixed as 5 × 10⁻⁴."
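For concreteness, below is a minimal PyTorch sketch consistent with the stated hyperparameters (4-layer MLPs with hidden dimension 64 and Mish activations, Huber loss, AdamW at learning rate 5 × 10⁻⁴, three epochs). The stand-in model and the random tensors are placeholders, not the paper's GNN with five attentive embedding layers or its generated data.

import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64, layers=4):
    # 4-layer MLP, hidden dimension 64, Mish activations (per the paper).
    dims = [in_dim] + [hidden] * (layers - 1) + [out_dim]
    mods = []
    for a, b in zip(dims[:-1], dims[1:]):
        mods += [nn.Linear(a, b), nn.Mish()]
    return nn.Sequential(*mods[:-1])  # no activation on the output layer

# Plain MLP standing in for the paper's GNN encoder so the loop is runnable.
model = mlp(in_dim=16, out_dim=1)

criterion = nn.HuberLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# Dummy (x, y) pairs standing in for the 50,000 generated training examples.
dataset = torch.utils.data.TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)

for epoch in range(3):  # trained for three epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()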