Exact Combinatorial Optimization with Graph Convolutional Neural Networks

Authors: Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, Andrea Lodi

NeurIPS 2019

Reproducibility assessment. Each entry below gives the reproducibility variable, the result, and the LLM response (quoted from the paper where applicable).
Research Type: Experimental
"We evaluate our approach on four classes of NP-hard problems, namely set covering, combinatorial auction, capacitated facility location and maximum independent set. We compare against the previously proposed approaches of Khalil et al. [30], Alvarez et al. [4] and Hansknecht et al. [24], as well as against the default hybrid branching rule in SCIP [20], a modern open-source solver. The results show that our choice of model, state encoding, and training procedure leads to policies that can offer a substantial improvement over traditional branching rules, and generalize well to larger instances than those used in training. ... We now present a comparative experiment against three competing machine learning approaches and SCIP's default branching rule to assess the value of our approach, as well as an ablation study to validate our architectural choices."
Researcher Affiliation: Academia
Maxime Gasse (Mila, Polytechnique Montréal) maxime.gasse@polymtl.ca; Didier Chételat (Polytechnique Montréal) didier.chetelat@polymtl.ca; Nicola Ferroni (University of Bologna) n.ferroni@specialvideo.it; Laurent Charlin (Mila, HEC Montréal) laurent.charlin@hec.ca; Andrea Lodi (Mila, Polytechnique Montréal) andrea.lodi@polymtl.ca
Pseudocode: No
The paper describes its method in prose and provides a diagram (Figure 2), but does not include structured pseudocode or algorithm blocks.
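
The core procedure is nonetheless easy to outline. The following is a minimal sketch only, not the authors' code: it collapses their two-stage bipartite half-convolutions into a single simplified message-passing step, and all feature dimensions and weight shapes are illustrative assumptions.

    import numpy as np

    def branch_scores(C, V, E, edges, Wc, Wv):
        """C: (n_cons, dc) constraint features, V: (n_vars, dv) variable features,
        E: (n_edges, de) edge features, edges: (n_edges, 2) rows of [cons_idx, var_idx].
        Wc: (dc + de, h) and Wv: (dv + h, h) are illustrative weight matrices."""
        n_vars, h = V.shape[0], Wc.shape[1]
        # constraint-to-variable messages over the bipartite graph
        msg = np.concatenate([C[edges[:, 0]], E], axis=1) @ Wc      # (n_edges, h)
        agg = np.zeros((n_vars, h))
        np.add.at(agg, edges[:, 1], msg)                            # sum messages per variable
        hidden = np.tanh(np.concatenate([V, agg], axis=1) @ Wv)     # (n_vars, h)
        return hidden.sum(axis=1)                                   # one score per variable

    def select_branching_variable(scores, candidates):
        """Greedy policy: branch on the candidate with the highest predicted score."""
        return max(candidates, key=lambda j: scores[j])
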
Open Source Code: Yes
"Code for reproducing all the experiments can be found at https://github.com/ds4dm/learn2branch."
Open Datasets: Yes
"Our first benchmark is comprised of set covering instances generated following Balas and Ho [7], with 1,000 columns. ... Our second benchmark is comprised of combinatorial auction instances, generated following the arbitrary relationships procedure of Leyton-Brown et al. [37, Section 4.3]. ... Our third benchmark is comprised of capacitated facility location instances generated following Cornuejols et al. [14], with 100 facilities. ... Finally, our fourth benchmark is comprised of maximum independent set instances on Erdős-Rényi random graphs, generated following the procedure of Bergman et al. [11, 4.6.4] with affinity set to 4."
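
For illustration, a generic random set covering generator in the spirit of, but not identical to, the cited Balas and Ho procedure could look as follows; the row count, density, and cost range are assumptions rather than the paper's parameters.

    import numpy as np

    def generate_set_cover(n_rows=500, n_cols=1000, density=0.05, max_cost=100, seed=0):
        rng = np.random.default_rng(seed)
        # random 0/1 coverage matrix with the given density
        A = (rng.random((n_rows, n_cols)) < density).astype(int)
        # guarantee feasibility: every row must be coverable by at least one column
        for i in np.where(A.sum(axis=1) == 0)[0]:
            A[i, rng.integers(n_cols)] = 1
        costs = rng.integers(1, max_cost + 1, size=n_cols)
        return A, costs  # minimize costs @ x  subject to  A @ x >= 1,  x binary
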
Dataset Splits: Yes
"Namely, for each benchmark, we generate 100,000 branching samples extracted from 10,000 randomly generated instances for training, 20,000 branching samples from 2,000 instances for validation, and same for test (see supplementary materials for details)."
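
A straightforward way to realize such a split, assuming a hypothetical list of generated instance files, is sketched below; the instance counts match those quoted above, everything else is illustrative.

    import random

    def split_instances(instance_paths, n_train=10_000, n_valid=2_000, n_test=2_000, seed=0):
        paths = list(instance_paths)
        assert len(paths) >= n_train + n_valid + n_test
        random.Random(seed).shuffle(paths)
        train = paths[:n_train]
        valid = paths[n_train:n_train + n_valid]
        test = paths[n_train + n_valid:n_train + n_valid + n_test]
        return train, valid, test
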
Hardware Specification: No
The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for its experiments. It only mentions using SCIP 6.0.1 as the backend solver.
Software Dependencies: Yes
"Throughout all experiments we use SCIP 6.0.1 as the backend solver."
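
As a usage illustration only, SCIP can be driven from Python through the PySCIPOpt interface; the instance file name and time limit below are placeholders, not settings reported by the paper.

    from pyscipopt import Model

    model = Model()
    model.hideOutput()                          # silence solver log
    model.readProblem("instance.lp")            # placeholder instance file
    model.setRealParam("limits/time", 3600.0)   # placeholder 1-hour time limit
    model.optimize()
    print("status:", model.getStatus(), "nodes:", model.getNNodes())
    if model.getStatus() == "optimal":
        print("objective:", model.getObjVal())
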
Experiment Setup: No
The paper describes the general training procedure, model architecture, and dataset generation, but does not provide specific numerical hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings in the main text. It defers some of these details to the supplementary materials, which are not included in the paper itself.
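
For context, the paper trains its policy by imitation of strong branching decisions with a cross-entropy loss. The sketch below shows one generic update of that kind; the learning rate and the grad_fn callback are hypothetical placeholders, not values or interfaces reported by the authors.

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def imitation_step(scores, expert_choice, grad_fn, params, lr=1e-3):
        """scores: per-candidate logits from the policy; expert_choice: index picked
        by strong branching; grad_fn: hypothetical callback mapping d(loss)/d(scores)
        to parameter gradients; params: flat numpy array of model parameters."""
        p = softmax(scores)
        loss = -np.log(p[expert_choice] + 1e-12)   # cross-entropy against the expert
        dscores = p.copy()
        dscores[expert_choice] -= 1.0              # gradient of the loss w.r.t. logits
        params -= lr * grad_fn(dscores)            # placeholder SGD update
        return loss
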