Improved CP-Based Lagrangian Relaxation Approach with an Application to the TSP

Authors: Raphaël Boudreault, Claude-Guy Quimper

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on TSP instances show that our algorithms allow significant gains on the resolution time and the size of the search space when compared to the state-of-the-art implementation.
Researcher Affiliation | Academia | Raphaël Boudreault and Claude-Guy Quimper, Université Laval, Québec, Canada. raphael.boudreault.1@ulaval.ca, claude-guy.quimper@ift.ulaval.ca
Pseudocode | Yes | Algorithm 1: SIMPLE(i, j)
    if {i, j} ∈ T then
        {k, l} ← GETREPLACEMENTEDGE(i, j)
        Δ ← U − (Z_LR(λ) + w(k, l) − w(i, j))
        if i ∉ {1, k, l} ∧ deg_T(i) ≤ 2 then  // Decrease λ_i
            v ← (−1) · MAXDECREASE(i, j)
            Δ ← Δ − (v · (deg_T(i) − 2) − v)
        if j ∉ {1, k, l} ∧ deg_T(j) ≤ 2 then  // Decrease λ_j
            v ← (−1) · MAXDECREASE(j, i)
            Δ ← Δ − (v · (deg_T(j) − 2) − v)
        if Δ < 0 then dom(x_{i,j}) ← dom(x_{i,j}) \ {0}
    else
        {k, l} ← GETSUPPORTEDGE(i, j)
        Δ ← U − (Z_LR(λ) + w(i, j) − w(k, l))
        if i ∉ {1, k, l} ∧ deg_T(i) ≥ 2 then  // Increase λ_i
            v ← MAXINCREASE(i, j)
            Δ ← Δ − (v · (deg_T(i) − 2) + v)
        if j ∉ {1, k, l} ∧ deg_T(j) ≥ 2 then  // Increase λ_j
            v ← MAXINCREASE(j, i)
            Δ ← Δ − (v · (deg_T(j) − 2) + v)
        if Δ < 0 then dom(x_{i,j}) ← dom(x_{i,j}) \ {1}

    Function MAXDECREASE(i, j)
        α ← min({GETREDUCEDCOST(i, k) : {i, k} ∈ E \ T} ∪ {∞})
        if j = 1 then
            r ← GETREPLACEMENTEDGE(i, j)
            β ← min({w(i, k) − w(r) : {i, k} ∈ R_{i,j}} ∪ {∞})
            α ← min{α, β}
        return α

    Function MAXINCREASE(i, j)
        α ← min({GETREPLACEMENTCOST(i, k) : {i, k} ∈ T \ M} ∪ {∞})
        if j = 1 then
            s ← GETSUPPORTEDGE(i, j)
            β ← min({w(s) − w(i, k) : {i, k} ∈ C_{i,j} \ M} ∪ {∞})
            α ← min{α, β}
        return α
Open Source Code | Yes | The code is available at http://www2.ift.ulaval.ca/~quimper/publications.php.
Open Datasets | Yes | We chose the symmetric TSP instances from the TSPLIB library [Reinelt, 1991] between 96 and 500 nodes that could be solved by Choco under 8 hours and with at least 100 search nodes.
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits. TSP instances are typically solved as a whole for optimization, not split for training/validation/testing in the conventional machine learning sense.
Hardware Specification | Yes | The experiments were performed on a CentOS Linux 7 machine using an Intel Xeon Silver 4110 CPU at 2.10 GHz and 32 GB of RAM.
Software Dependencies | Yes | The algorithms were implemented in Java 14 using the solver Choco 4.0.6 [Jussien et al., 2008] and its extension Choco Graph 4.2.3 [Fages, 2014].
Experiment Setup | Yes | The TSP was modeled using the WEIGHTEDCIRCUIT constraint already implemented in Choco Graph using the state-of-the-art algorithms [Benchimol et al., 2012]. It uses a subgradient descent algorithm to optimize the Lagrangian multipliers. We kept the default parameters, but we did not restart the algorithm when the lower bound of the 1-tree was increased. As an initial upper bound on the objective variable, we gave the bound provided by the LKH-2.0.9 heuristic [Helsgaun, 2000]. The search strategy was fixed to maxCost with the LCFirst policy [Fages et al., 2016]. ... for HYBRID, since α-SETS is slowed down by the number of constraints, it was only called when |E| ≤ 2|V|, with a limit of 2 on the cardinality of the α-sets and a maximum of 10 iterations when trying to reach the fixed point.
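At its core, the SIMPLE pseudocode quoted in the table performs a bound test: for a tree edge {i, j} with replacement edge {k, l}, value 0 is filtered from dom(x_{i,j}) when U − (Z_LR(λ) + w(k, l) − w(i, j)) < 0, and symmetrically for a non-tree edge with its support edge. A minimal Python sketch of that test (the function names and numbers are illustrative assumptions, not the paper's or Choco's API; the multiplier adjustments via MAXDECREASE/MAXINCREASE, which strengthen Δ before the comparison, are omitted):

```python
def can_remove_tree_edge(U, z_lr, w_ij, w_kl):
    """Bound test for tree edge {i, j} with replacement edge {k, l}:
    forcing x_{i,j} = 0 yields the 1-tree bound z_lr + w_kl - w_ij.
    Return True iff value 0 survives the test (Delta >= 0)."""
    delta = U - (z_lr + w_kl - w_ij)
    return delta >= 0

def can_add_nontree_edge(U, z_lr, w_ij, w_kl):
    """Symmetric test for a non-tree edge {i, j} with support edge {k, l}:
    forcing x_{i,j} = 1 yields the bound z_lr + w_ij - w_kl.
    Return True iff value 1 survives the test."""
    delta = U - (z_lr + w_ij - w_kl)
    return delta >= 0

# With upper bound U = 100 and current bound Z_LR = 95, removing a tree edge
# of weight 10 whose cheapest replacement costs 20 raises the bound to 105 > U,
# so the edge is forced into the tour (value 0 is filtered):
print(can_remove_tree_edge(100, 95, 10, 20))  # False -> filter 0
print(can_remove_tree_edge(100, 95, 10, 12))  # True  -> 0 survives
```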
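The experiment setup mentions a subgradient descent over the Lagrangian multipliers of the 1-tree relaxation, i.e. the Held–Karp bound. As a hedged illustration of that scheme, here is a self-contained Python sketch on a toy symmetric instance; the Prim-style 1-tree routine, the fixed step size, and all names are my own simplifications, not Choco Graph's tuned implementation:

```python
def min_one_tree(n, w):
    """Minimum 1-tree under weight matrix w: a Prim MST over nodes 1..n-1,
    plus the two cheapest edges incident to the special node 0."""
    in_tree = {1}
    edges, total = [], 0.0
    while len(in_tree) < n - 1:
        c, i, j = min((w[i][j], i, j)
                      for i in in_tree for j in range(1, n) if j not in in_tree)
        in_tree.add(j)
        edges.append((i, j))
        total += c
    for j in sorted(range(1, n), key=lambda k: w[0][k])[:2]:
        edges.append((0, j))
        total += w[0][j]
    return edges, total

def held_karp_bound(n, dist, iters=100, step=1.0):
    """Subgradient ascent on the 1-tree Lagrangian: multipliers lam penalize
    each node's degree deviation from 2 (fixed step, unlike a real solver)."""
    lam = [0.0] * n
    best = float("-inf")
    for _ in range(iters):
        # Lagrangian edge weights: w(i, j) + lam_i + lam_j
        w = [[dist[i][j] + lam[i] + lam[j] for j in range(n)] for i in range(n)]
        edges, wt = min_one_tree(n, w)
        best = max(best, wt - 2.0 * sum(lam))  # valid lower bound on any tour
        deg = [0] * n
        for i, j in edges:
            deg[i] += 1
            deg[j] += 1
        if all(d == 2 for d in deg):
            break  # the 1-tree is itself a tour: the bound is tight
        for i in range(n):
            lam[i] += step * (deg[i] - 2)  # subgradient update
    return best
```

On the 4-city matrix dist = [[0, 1, 4, 3], [1, 0, 2, 5], [4, 2, 0, 1], [3, 5, 1, 0]], the very first 1-tree already has all degrees equal to 2, so held_karp_bound(4, dist) returns 7.0, the optimal tour length; real TSPLIB instances need the step-size schedule the report's "default parameters" refer to.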