MIP-GNN: A Data-Driven Framework for Guiding Combinatorial Solvers
Authors: Elias B. Khalil, Christopher Morris, Andrea Lodi
AAAI 2022, pp. 10219-10227
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We integrate MIP-GNN into a state-of-the-art MIP solver, applying it to tasks such as node selection and warm-starting, showing significant improvements compared to the default setting of the solver on two classes of challenging binary MILPs. Our code and appendix are publicly available at https://github.com/lyeskhalil/mipGNN. |
| Researcher Affiliation | Academia | Elias B. Khalil,*1,2 Christopher Morris,*3 Andrea Lodi4 1Department of Mechanical & Industrial Engineering, University of Toronto 2Scale AI Research Chair in Data-Driven Algorithms for Modern Supply Chains 3Mila Quebec AI Institute and McGill University 4CERC, Polytechnique Montréal and Jacobs Technion-Cornell Institute, Cornell Tech and Technion IIT khalil@mie.utoronto.ca, chris@christophermorris.info, andrea.lodi@cornell.edu |
| Pseudocode | No | The paper describes the architecture and passes using text and mathematical formulations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and appendix are publicly available at https://github.com/lyeskhalil/mipGNN. |
| Open Datasets | Yes | The generalized independent set problem (GISP) (Colombi, Mansini, and Savelsbergh 2017) and fixed-charge multi-commodity network flow problem (FCMNF) (Hewitt, Nemhauser, and Savelsbergh 2010) have been used to model a variety of applications including forest harvesting (Hochbaum and Pathria 1997) and supply chain planning, respectively. |
| Dataset Splits | Yes | During training, 20% of the training instances were used as a validation set for early stopping. |
| Hardware Specification | No | Training is done on GPUs whereas evaluation (including making predictions with trained models and solving MILPs with CPLEX) is done on CPUs with a single thread. Appendix section CPU/GPU specifications provides additional details. |
| Software Dependencies | Yes | We use CPLEX 12.10.0 as a backend for data collection and BLP solving. |
| Experiment Setup | Yes | The training algorithm is ADAM (Kingma and Ba 2015), which ran for 30 epochs with an initial learning rate of 0.001 and exponential learning rate decay with a patience of 10. |
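The split and training details quoted above (20% of training instances held out for early stopping; 30 epochs of ADAM at an initial learning rate of 0.001 with exponential learning-rate decay and a patience of 10) can be sketched as follows. This is a minimal illustration of the protocol, not the authors' code: the function names, the decay factor of 0.5, and the stand-in validation-loss callable are all assumptions; a real run would perform Adam updates over a GNN and evaluate actual validation loss each epoch.

```python
import random


def split_instances(instances, val_frac=0.2, seed=0):
    """80/20 train/validation split of the training instances (as in the paper)."""
    rng = random.Random(seed)
    shuffled = list(instances)
    rng.shuffle(shuffled)
    n_val = max(1, int(len(shuffled) * val_frac))
    return shuffled[n_val:], shuffled[:n_val]  # (train, validation)


def run_training(val_loss_per_epoch, epochs=30, lr=1e-3, decay=0.5, patience=10):
    """Early-stopping bookkeeping with exponential LR decay after a plateau.

    `val_loss_per_epoch` maps epoch -> validation loss and stands in for
    evaluating a trained model on the held-out set; `decay` is an assumed
    factor (the paper states exponential decay but not the factor).
    """
    best, stale, history = float("inf"), 0, []
    for epoch in range(epochs):
        # ... one Adam optimization pass over the training set would go here ...
        loss = val_loss_per_epoch(epoch)
        if loss < best:
            best, stale = loss, 0          # improvement: reset the patience counter
        else:
            stale += 1
            if stale > patience:           # plateau exceeded the patience budget
                lr *= decay                # exponential learning-rate decay step
                stale = 0
        history.append((epoch, loss, lr))
    return best, lr, history
```

For example, `split_instances(range(100))` yields 80 training and 20 validation instances, and a flat validation loss over 30 epochs triggers two decay steps under the default patience of 10.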