Learning a Latent Search Space for Routing Problems using Variational Autoencoders

Authors: André Hottung, Bhanu Bhandari, Kevin Tierney

ICLR 2021

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate CVAE-Opt on datasets of TSP and CVRP instances and compare it to state-of-the-art optimization approaches. The performance of any search algorithm depends on the structure of the search space. Ideally, solutions of similar quality should be placed in similar regions of the search space. We conduct the following experiment to evaluate if our method learns a (latent) search space in which solutions of high quality can on average be found in the proximity of other high-quality solutions.
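The locality property described above can be probed numerically: perturb the latent vector of a known high-quality solution at increasing distances and record the mean cost of the decoded solutions. The sketch below is an illustrative assumption, not the paper's exact protocol; `evaluate`, the radii, and the sample count are hypothetical stand-ins (in CVAE-Opt, `evaluate` would decode a latent vector into a route and return its length).

```python
import numpy as np

def locality_probe(evaluate, z_star, radii=(0.1, 0.5, 1.0, 2.0), samples=64, seed=0):
    """Perturb a good latent vector z_star at fixed radii and record the
    mean cost of the perturbed points. In a locality-preserving latent
    search space, mean cost should rise smoothly with the radius."""
    rng = np.random.default_rng(seed)
    means = {}
    for r in radii:
        d = rng.standard_normal((samples, z_star.size))
        # rescale each perturbation to lie exactly at distance r from z_star
        d *= r / np.linalg.norm(d, axis=1, keepdims=True)
        means[r] = float(np.mean([evaluate(z_star + x) for x in d]))
    return means
```

A smooth, monotone cost-vs-radius curve would indicate that high-quality solutions cluster near other high-quality solutions, which is the structure the search in latent space relies on.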
Researcher Affiliation Academia André Hottung Bielefeld University Bielefeld, Germany andre.hottung@uni-bielefeld.de; Bhanu Bhandari University of Massachusetts Amherst Amherst, MA, USA bhanubhandar@cs.umass.edu; Kevin Tierney Bielefeld University Bielefeld, Germany kevin.tierney@uni-bielefeld.de
Pseudocode No The paper includes architectural diagrams (Figure 1, Figure 2) and mathematical formulations, but it does not provide any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code.
Open Source Code Yes Our implementation of CVAE-Opt is available at https://github.com/ahottung/CVAE-Opt
Open Datasets Yes For each of these six problem classes, we generate instances with identical properties to the instances used in Kool et al. (2019) using the instance generator made available by the authors.
Dataset Splits Yes We use 93,440 instances for model training, 100 for search validation, and 1,000 for testing the search per problem class.
Hardware Specification Yes In all experiments, CVAE-Opt is run on a single Nvidia Tesla V100 GPU and a single core of an Intel Xeon 4114 CPU at 2.2 GHz.
Software Dependencies No The paper mentions the use of the Adam optimizer and implies neural network libraries through architecture descriptions, but it does not specify concrete software dependencies with version numbers (e.g., 'PyTorch 1.9', 'CUDA 11.1').
Experiment Setup Yes The training batch size is set to 128 and the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 10^-3 is used. In all experiments of CVAE-Opt-DE, we use a DE population size of 600. The crossover probability CR and the differential weight F of the DE are set to 0.95 and 0.3, respectively. Solutions are generated greedily by the decoder (i.e., the action with the highest probability value is selected at each step). The search terminates after 300 iterations.
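The search settings above describe a standard differential evolution (DE/rand/1/bin) loop over the continuous latent space. The sketch below uses the paper's stated hyperparameters (population 600, CR = 0.95, F = 0.3, 300 iterations) but is only a minimal illustration under assumptions: `evaluate` is a hypothetical function that greedily decodes a latent vector into a solution and returns its cost, and the initialization and tie-breaking details are not taken from the paper.

```python
import numpy as np

def de_latent_search(evaluate, dim, pop_size=600, F=0.3, CR=0.95,
                     iterations=300, seed=0):
    """Minimal DE/rand/1/bin search over a continuous latent space.

    evaluate: maps a latent vector to a cost (lower is better); in
    CVAE-Opt this would decode the vector and return the route length.
    """
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, dim))
    costs = np.array([evaluate(x) for x in pop])
    for _ in range(iterations):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            a, b, c = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            # binomial crossover; force at least one gene from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            trial_cost = evaluate(trial)
            if trial_cost <= costs[i]:            # greedy selection
                pop[i], costs[i] = trial, trial_cost
    best = int(np.argmin(costs))
    return pop[best], costs[best]
```

With the full settings this performs pop_size × iterations decoder calls, which is why the paper's budget (600 × 300) dominates the search runtime; smaller populations can be used for quick sanity checks.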