L2P-MIP: Learning to Presolve for Mixed Integer Programming

Authors: Chang Liu, Zhichen Dong, Haobo Ma, Weilin Luo, Xijun Li, Bowen Pang, Jia Zeng, Junchi Yan

ICLR 2024

Reproducibility assessment. Each entry gives the variable, the result, and the supporting LLM response:
Research Type: Experimental
  "Experiments on multiple real-world datasets show that well-trained neural networks can infer proper presolving for arbitrary incoming MIP instances in less than 0.5s, which is negligible compared with the solving time, often hours or days."
Researcher Affiliation: Collaboration
  "Chang Liu (1), Zhichen Dong (1), Haobo Ma (1), Weilin Luo (2), Xijun Li (2), Bowen Pang (2), Jia Zeng (2), Junchi Yan (1); (1) Department of Computer Science and Engineering, Shanghai Jiao Tong University; (2) Huawei Noah's Ark Lab"
Pseudocode: No
  The paper describes the algorithms and framework using text and figures (flowcharts such as Figures 1, 2, and 3) but does not provide any explicitly labeled pseudocode blocks or algorithm listings.
Open Source Code: Yes
  "PyTorch Code: https://github.com/Thinklab-SJTU/L2P-MIP ... We have open-sourced our code as a benchmark of utilizing machine learning to improve the presolving in MIP solvers; please refer to our GitHub repository for more details."
Open Datasets: Yes
  "We follow (Gasse et al., 2019; 2022) and use popular datasets in our experiments. We evaluate our approach on the four levels of difficulty: easy, medium, hard, and industrial-level datasets: 1) Easy datasets comprise three popular synthetic MIP benchmarks: Set Covering (Balas & Ho, 1980), Maximum Independent Set (Bergman et al., 2016) and Maritime Inventory Routing Problem (MIRP) (Papageorgiou et al., 2014). ... 2) Medium datasets include CORLAT (Gomes et al., 2008) and MIK (Atamtürk, 2003)... 3) Hard datasets from NeurIPS 2021 Competition (Gasse et al., 2022) include Item Placement, Load Balancing, Anonymous, and Maritime Inventory Routing problem (MIRP)..."
Dataset Splits: No
  The paper states "splitting data into training and testing sets with 80% and 20% instances". Although 'validation' is mentioned in the framework design for evaluating performance metrics, no explicit percentage is given for a dedicated validation split.
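The reported 80%/20% train/test protocol can be reproduced with a seeded shuffle; a minimal stdlib sketch, where the instance file names are hypothetical placeholders rather than the paper's actual data:

```python
import random

def split_instances(instances, train_frac=0.8, seed=0):
    """Reproducibly shuffle instance identifiers and split them
    into train/test sets, matching the 80/20 protocol reported
    in the paper (no separate validation split is described)."""
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical instance names for illustration only.
train, test = split_instances([f"instance_{i}.mps" for i in range(100)])
print(len(train), len(test))  # 80 20
```

Fixing the seed keeps the split stable across runs, which matters when comparing presolving configurations on the same held-out instances.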
Hardware Specification: Yes
  "The experiments are conducted in a Linux workstation with NVIDIA 3090 GPU and AMD Ryzen Threadripper 3970X 32-Core CPU."
Software Dependencies: Yes
  "Throughout all experiments, we use SCIP 7.0.3 (Gamrath et al., 2020) as the back-end solver... Besides, we use Ecole 0.7.3 (Prouvost et al., 2020) and PySCIPOpt 3.5.0 (Maher et al., 2016) for better implementation."
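Pinning the reported versions is the first step toward reproduction. A sketch assuming the PyPI package names `pyscipopt` and `ecole`, and a locally installed SCIP 7.0.3 that PySCIPOpt can build against (SCIP itself is not distributed on PyPI at that version):

```shell
# SCIP 7.0.3 must already be installed; point PySCIPOpt at it.
export SCIPOPTDIR=/path/to/scip-7.0.3   # hypothetical install prefix
pip install "pyscipopt==3.5.0" "ecole==0.7.3"
```

Matching all three versions matters here because presolver behavior, and hence the learned presolving targets, can change between SCIP releases.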
Experiment Setup: Yes
  "For SA, we set the initial temperature as 1e5 with the decay rate 0.9 until it reaches the minimum temperature of 1e-2. For the neural networks, we use Adam with a batch size of 32, a learning rate of 1e-4, and a hidden size of 64. ... The loss functions used in our methods are ListMLE (Xia et al., 2008) for the priority and Cross-Entropy (Good, 1952) for the max-round and timing. The number of epochs for training is 10,000."
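The quoted SA hyperparameters fully determine a geometric cooling schedule (about 153 cooling steps from 1e5 down past 1e-2 at decay 0.9). A minimal stdlib sketch of such a schedule; the toy neighborhood and cost function below are hypothetical stand-ins, not the paper's actual presolver-priority objective:

```python
import math
import random

def simulated_annealing(cost, init_state, neighbor,
                        t_init=1e5, decay=0.9, t_min=1e-2, seed=0):
    """Geometric-cooling SA with the hyperparameters reported in the
    paper: temperature starts at 1e5 and is multiplied by 0.9 each
    step until it falls below 1e-2 (~153 iterations)."""
    rng = random.Random(seed)
    state = best = init_state
    t = t_init
    while t > t_min:
        cand = neighbor(state, rng)
        delta = cost(cand) - cost(state)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / T), which shrinks as T cools.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = cand
        if cost(state) < cost(best):
            best = state
        t *= decay
    return best

# Toy usage: reorder integers toward ascending order, a hypothetical
# stand-in for searching over presolver priority orderings.
def swap_neighbor(perm, rng):
    i, j = rng.sample(range(len(perm)), 2)
    out = list(perm)
    out[i], out[j] = out[j], out[i]
    return out

def disorder(perm):
    # Number of adjacent pairs that are out of order.
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

result = simulated_annealing(disorder, list(range(8))[::-1], swap_neighbor)
print(disorder(result) <= disorder(list(range(8))[::-1]))  # True
```

With these settings, most of the schedule is spent at high temperature (near-random exploration), and only the final few dozen steps behave greedily, which is why tracking `best` separately from `state` is worthwhile.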