Learning to Branch in Mixed Integer Programming

Authors: Elias Khalil, Pierre Le Bodic, Le Song, George Nemhauser, Bistra Dilkina

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on benchmark instances indicate that our method produces significantly smaller search trees than existing heuristics, and is competitive with a state-of-the-art commercial solver.
Researcher Affiliation | Academia | 1 School of Computational Science & Engineering, 2 School of Industrial & Systems Engineering, Georgia Institute of Technology
Pseudocode | No | The paper describes its framework in three phases (data collection, model learning, ML-based branching) but does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We use the Benchmark set from MIPLIB2010 as our test set; we refer to (Koch et al. 2011) for details.
Dataset Splits | No | The paper mentions collecting data for a "training dataset" and using MIPLIB2010 as a "test set," but it does not specify explicit training, validation, and test splits with percentages or sample counts, nor does it mention a dedicated validation set.
Hardware Specification | Yes | All experiments were run on a cluster of four 64-core machines with AMD 2.4 GHz processors and 264 GB of memory; each run was limited to 2 GB of memory.
Software Dependencies | Yes | We use the C API of IBM ILOG CPLEX 12.6.1.
Experiment Setup | Yes | We use α = 0.2 and SVMrank with a trade-off parameter C = 0.1 between training error and margin (λ in (4) is a function of C). We varied α ∈ {0.1, 0.2, 0.3, 0.4, 0.5} and C ∈ {0.001, 0.01, 0.1, 1} (0.01 is the default in SVMrank), and found that SB+ML performs similarly. For SB, SB+PC and SB+ML, κ is set to 10, and all SB calls are limited to 50 dual simplex iterations.
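The role of the α parameter in the experiment setup can be sketched as follows. This is a minimal illustration, assuming the paper's labeling rule in which a branching candidate receives label 1 when its strong-branching (SB) score is within a factor (1 − α) of the best score at the node; the function name and the toy scores are hypothetical, not from the authors' code.

```python
def label_candidates(sb_scores, alpha=0.2):
    """Binary ranking labels for one search-tree node.

    A candidate variable is labeled 1 ("good") when its SB score is
    at least (1 - alpha) times the best SB score at the node, and 0
    otherwise. These labels feed a pairwise ranker such as SVMrank.
    """
    best = max(sb_scores)
    return [1 if s >= (1 - alpha) * best else 0 for s in sb_scores]


# Toy example: SB scores for five candidate variables at one node.
# With alpha = 0.2, the cutoff is 0.8 * 10.0 = 8.0.
scores = [10.0, 9.0, 5.0, 2.0, 8.5]
print(label_candidates(scores, alpha=0.2))  # [1, 1, 0, 0, 1]
```

Smaller α makes the cutoff stricter, so fewer candidates are labeled as good; the paper reports that SB+ML performed similarly across α ∈ {0.1, ..., 0.5}, suggesting the learned ranker is not very sensitive to this threshold.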