Searching Large Neighborhoods for Integer Linear Programs with Contrastive Learning

Authors: Taoan Huang, Aaron M Ferber, Yuandong Tian, Bistra Dilkina, Benoit Steiner

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we show that CL-LNS outperforms state-of-the-art ML and non-ML approaches at different runtime cutoffs ranging from a few minutes to an hour in terms of multiple metrics, including the primal gap, the primal integral, the best performing rate and the survival rate, demonstrating the effectiveness and efficiency of CL-LNS. In this section, we introduce our evaluation setup and then present the results. Our code and other resources are available at https://taoanhuang.github.io/LNS_ILP. (A sketch of the primal gap and primal integral metrics follows the table.)
Researcher Affiliation | Collaboration | 1 University of Southern California; 2 Meta AI, FAIR; 3 Anthropic.
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and other resources are available at https://taoanhuang.github.io/LNS_ILP.
Open Datasets | Yes | We evaluate on four NP-hard problem benchmarks that are widely used in existing studies (Wu et al., 2021; Song et al., 2020; Scavuzzo et al., 2022), which consist of two graph optimization problems, namely the minimum vertex cover (MVC) and maximum independent set (MIS) problems, and two non-graph optimization problems, namely the combinatorial auction (CA) and set covering (SC) problems. (A toy MVC formulation is sketched after the table.)
Dataset Splits | Yes | For data collection and training, we generate another set of 1,024 small instances for each problem. We split these instances into training and validation sets consisting of 896 and 128 instances, respectively.
Hardware Specification | Yes | We conduct experiments on 2.5GHz Intel Xeon Platinum 8259CL CPUs with 32 GB memory. Training is done on a NVIDIA A100 GPU with 40 GB memory.
Software Dependencies | Yes | In experiments, the LB ILP is solved with SCIP 8.0.1 (Bestuzheva et al., 2021) with an hour runtime limit and kt is fine-tuned for each type of instances. (Setting such a time limit is sketched after the table.)
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2015) with learning rate 10^-3. We use a batch size of 32 and train for 30 epochs (the training typically converges in less than 20 epochs and 24 hours). (A training-loop skeleton with these hyperparameters follows the table.)
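
The primal gap and primal integral mentioned in the Research Type row follow the standard MIP benchmarking definitions. The sketch below is a minimal illustration of those definitions, not code from the paper; the incumbent list, runtime cutoff, and best-known value are made-up inputs.

```python
# Minimal sketch of the primal gap and primal integral metrics
# (standard definitions; not taken verbatim from the paper).

def primal_gap(obj, best_known):
    """Relative gap between an incumbent objective and the best known value."""
    if obj == best_known == 0.0:
        return 0.0
    if obj * best_known < 0:          # opposite signs: gap is defined as 1
        return 1.0
    return abs(obj - best_known) / max(abs(obj), abs(best_known))

def primal_integral(incumbents, horizon, best_known):
    """Integrate the primal gap over time.

    `incumbents` is a list of (time, objective) pairs sorted by time;
    `horizon` is the runtime cutoff in seconds.
    """
    total, gap, prev_t = 0.0, 1.0, 0.0   # gap is 1 before the first incumbent
    for t, obj in incumbents:
        total += gap * (t - prev_t)
        gap, prev_t = primal_gap(obj, best_known), t
    return total + gap * (horizon - prev_t)

# Example: incumbents found at 10s and 60s, evaluated with a 300s cutoff.
print(primal_integral([(10.0, 120.0), (60.0, 101.0)], 300.0, 100.0))
```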
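
To make the Open Datasets row concrete, here is a toy minimum vertex cover instance written as an ILP with PySCIPOpt. The graph generator, size, and density are illustrative assumptions; the paper's actual instance generators (following Wu et al., 2021 and Song et al., 2020) may differ.

```python
# Hedged sketch: a toy minimum vertex cover (MVC) ILP.
# Graph size and generator are illustrative placeholders, not the paper's setup.
import networkx as nx
from pyscipopt import Model, quicksum

graph = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)

model = Model("mvc_toy")
x = {v: model.addVar(vtype="B", name=f"x_{v}") for v in graph.nodes}

# Every edge must be covered by at least one of its endpoints.
for u, v in graph.edges:
    model.addCons(x[u] + x[v] >= 1)

# Minimize the number of selected vertices.
model.setObjective(quicksum(x[v] for v in graph.nodes), "minimize")
model.optimize()
print("cover size:", model.getObjVal())
```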
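
The Software Dependencies row reports solving the LB ILP with SCIP 8.0.1 under a one-hour limit. The paper does not state which interface it uses; the sketch below assumes PySCIPOpt and a hypothetical instance file purely to show where such a limit would be set.

```python
# Hedged sketch: imposing a one-hour limit on a SCIP solve via PySCIPOpt.
# Whether CL-LNS goes through PySCIPOpt or another interface is an assumption here.
from pyscipopt import Model

model = Model("lb_ilp")                  # placeholder model name
model.readProblem("lb_instance.lp")      # hypothetical instance file
model.setParam("limits/time", 3600)      # one-hour runtime limit (seconds)
model.optimize()

if model.getNSols() > 0:
    print("best objective:", model.getObjVal())
```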
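
The Experiment Setup row gives Adam with learning rate 10^-3, batch size 32, and 30 epochs. The skeleton below mirrors those hyperparameters in PyTorch; the model, loss, and data are placeholders, since the paper's GNN policy and contrastive loss are not reproduced here.

```python
# Hedged skeleton mirroring the reported hyperparameters
# (Adam, lr=1e-3, batch size 32, 30 epochs). Model, loss, and dataset
# are placeholders, not the paper's GNN policy or contrastive loss.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(16, 1)                        # placeholder policy network
criterion = torch.nn.MSELoss()                        # placeholder loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

dataset = TensorDataset(torch.randn(896, 16), torch.randn(896, 1))  # dummy data
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(30):
    for features, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), targets)
        loss.backward()
        optimizer.step()
```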