A GNN-Guided Predict-and-Search Framework for Mixed-Integer Linear Programming

Authors: Qingyu Han, Linxin Yang, Qian Chen, Xiang Zhou, Dong Zhang, Akang Wang, Ruoyu Sun, Xiaodong Luo

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on public datasets, and computational results demonstrate that our proposed framework achieves 51.1% and 9.9% performance improvements to MILP solvers SCIP and Gurobi on primal gaps, respectively. From Section 4 (Computational Studies): We conducted extensive experiments on four public datasets with fixed testing environments to ensure fair comparisons.
Researcher Affiliation | Collaboration | (1) Shenzhen Research Institute of Big Data, China; (2) Shandong University, China; (3) School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; (4) School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China; (5) Huawei, China; (6) Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data, China
Pseudocode | Yes | Algorithm 1: Predict-and-Search Algorithm (a hedged sketch of this procedure appears after the table).
Open Source Code | Yes | We make our code publicly available at https://github.com/sribdcn/Predict-and-Search_MILP_method.
Open Datasets | Yes | Two of them come from the NeurIPS ML4CO 2021 competition (Gasse et al., 2022): the Balanced Item Placement (IP) dataset and the Workload Apportionment (WA) dataset. We generated the remaining two datasets, Independent Set (IS) and Combinatorial Auction (CA), using the Ecole library (Prouvost et al., 2020), as Gasse et al. (2019) did. A hedged sketch of Ecole-based instance generation and the dataset split appears after the table.
Dataset Splits | Yes | Each dataset contains 400 instances: 240 in the training set, 60 in the validation set, and 100 in the test set.
Hardware Specification | Yes | The evaluation machine has two Intel(R) Xeon(R) Gold 5117 CPUs @ 2.00GHz, 256GB RAM, and two Nvidia V100 GPUs.
Software Dependencies | Yes | SCIP 8.0.1 (Bestuzheva et al., 2021), Gurobi 9.5.2 (Gurobi Optimization, LLC, 2022), and PyTorch 1.10.2 (Paszke et al., 2019) are utilized in our experiments.
Experiment Setup | Yes | The loss function is specified in Equations (4) and (5), with a batch size of 8. The Adam optimizer (Kingma & Ba, 2014) is used to minimize the loss, with an initial learning rate of 0.003. A hedged PyTorch sketch of this training setup follows the table.
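
For concreteness, here is a minimal sketch of the predict-and-search step described by Algorithm 1, assuming a trained GNN has already produced a probability for each binary variable to take value 1. It fixes the k0 lowest-scoring binaries toward 0 and the k1 highest-scoring ones toward 1, with a trust-region constraint allowing at most delta of those fixings to be violated. The function signature and the use of gurobipy are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the predict-and-search step (Algorithm 1), assuming a trained GNN
# has produced a probability probs[v] for each binary variable v taking value 1.
# The helper `predict_and_search` and its parameters (k0, k1, delta) follow the
# paper's notation but are an illustrative reconstruction, not the authors' code.
import gurobipy as gp
from gurobipy import GRB

def predict_and_search(instance_path, probs, k0, k1, delta):
    """Fix the k0 least-likely and k1 most-likely binaries near 0/1,
    allowing at most `delta` of those fixings to be violated."""
    model = gp.read(instance_path)
    bins = [v for v in model.getVars() if v.VType == GRB.BINARY]

    # Rank binary variables by the predicted probability of taking value 1.
    ranked = sorted(bins, key=lambda v: probs[v.VarName])
    to_zero = ranked[:k0]                       # k0 smallest scores -> pushed toward 0
    to_one = ranked[-k1:] if k1 > 0 else []     # k1 largest scores  -> pushed toward 1

    # Trust-region constraint: total deviation from the partial assignment
    # is bounded by delta, so the solver searches near the prediction.
    deviation = gp.quicksum(v for v in to_zero) + gp.quicksum(1 - v for v in to_one)
    model.addConstr(deviation <= delta, name="trust_region")

    model.optimize()
    return model
```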
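
The IS and CA datasets could be produced along the following lines with Ecole's built-in instance generators, combined with the 240/60/100 split reported above; the generator parameters and seed below are illustrative guesses rather than the paper's exact settings.

```python
# Sketch of IS/CA instance generation with Ecole plus the 240/60/100 split.
# Generator sizes and the seed are assumed values, not the paper's settings.
import pathlib
import ecole

generators = {
    "IS": ecole.instance.IndependentSetGenerator(n_nodes=500),       # assumed size
    "CA": ecole.instance.CombinatorialAuctionGenerator(n_items=100,  # assumed size
                                                       n_bids=500),
}
splits = [("train", 240), ("valid", 60), ("test", 100)]  # 400 instances per dataset

for name, gen in generators.items():
    gen.seed(0)  # fixed seed for reproducibility
    for split, count in splits:
        out_dir = pathlib.Path(name) / split
        out_dir.mkdir(parents=True, exist_ok=True)
        for i in range(count):
            instance = next(gen)  # generators yield ecole.scip.Model objects
            instance.write_problem(str(out_dir / f"{name.lower()}_{i}.lp"))
```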
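
Finally, a minimal PyTorch training-loop sketch matching the reported hyperparameters (Adam, initial learning rate 0.003, batch size 8). `GNNPredictor`, `loss_fn`, and `train_set` are placeholders for the paper's GNN, the loss of Equations (4) and (5), and the 240-instance training split; the epoch count is assumed.

```python
# Training-loop sketch for the reported setup: Adam, lr 0.003, batch size 8.
# GNNPredictor, loss_fn, and train_set are placeholders; batching bipartite
# MILP graphs would in practice need a graph-aware collate function.
import torch
from torch.utils.data import DataLoader

model = GNNPredictor()  # placeholder GNN over the bipartite MILP graph
optimizer = torch.optim.Adam(model.parameters(), lr=0.003)
loader = DataLoader(train_set, batch_size=8, shuffle=True)
num_epochs = 100  # assumed; not reported in the excerpt

for epoch in range(num_epochs):
    for batch in loader:
        optimizer.zero_grad()
        probs = model(batch)          # predicted marginals for binary variables
        loss = loss_fn(probs, batch)  # placeholder for Equations (4) and (5)
        loss.backward()
        optimizer.step()
```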