Learning to Run Heuristics in Tree Search

Authors: Elias B. Khalil, Bistra Dilkina, George L. Nemhauser, Shabbir Ahmed, Yufen Shao

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, our approach improves the primal performance of a state-of-the-art Mixed Integer Programming solver by up to 6% on a set of benchmark instances, and by up to 60% on a family of hard Independent Set instances.
Researcher Affiliation | Collaboration | Elias B. Khalil (1), Bistra Dilkina (1), George L. Nemhauser (2), Shabbir Ahmed (2), Yufen Shao (3). (1) School of Computational Science & Engineering, Georgia Institute of Technology, Atlanta, GA, USA; (2) School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA; (3) ExxonMobil Upstream Research Company, Houston, TX, USA. {elias.khalil, bdilkina}@cc.gatech.edu, gn3@gatech.edu, shabbir.ahmed@isye.gatech.edu, yufen.shao@exxonmobil.com
Pseudocode | No | The paper describes algorithms in text, such as the RWS rule, but does not include any formal pseudocode blocks or sections explicitly labeled 'Algorithm' or 'Pseudocode'.
Open Source Code | No | The paper states that the authors 'modify the open-source MIP solver SCIP 3.2.1' but does not provide any information, links, or explicit statements indicating that their own modifications, or the code developed for this paper, are open source or publicly available.
Open Datasets | Yes | Table 3 shows LOOCV results using 83 instances of the Benchmark set from MIPLIB2010 [Koch et al., 2011], for which data was collected by running SCIP for at most 2 hours per instance.
Dataset Splits | Yes | We use leave-one-out cross-validation (LOOCV) on a per-instance basis: for each test instance Itest, a model is learned for a heuristic H using dataset D^H_train, where D^H_train does not include any data from Itest; the model is then tested on Itest's dataset, D^H_Itest. (A minimal Python sketch of this per-instance LOOCV appears after the table.)
Hardware Specification | Yes | All experiments were run on a cluster of four 64-core machines with AMD 2.4 GHz processors and 264 GB RAM.
Software Dependencies | Yes | To evaluate the proposed framework, we modify the open-source MIP solver SCIP 3.2.1 [Gamrath et al., 2016]; CPLEX 12.6.1 [IBM, 2014] is used as SCIP's LP solver. Machine learning experiments use scikit-learn [Pedregosa et al., 2011].
Experiment Setup | Yes | Data points with label y^H_N = 1 are heavily weighted in the LR loss function to account for the extreme class imbalance we encounter [He and Garcia, 2009], as can be seen in the 'Success Rate' column of Table 3. The regularization parameter of LR is kept at a default of 1. (A hedged scikit-learn sketch of this weighting setup also appears after the table.)
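
The Dataset Splits row describes leave-one-out cross-validation carried out per instance rather than per data point. The following is a minimal sketch of that protocol, assuming a hypothetical `datasets` dict mapping each MIP instance name to the (X, y) node-level feature and label arrays collected for one heuristic H; it illustrates the evaluation scheme only and is not the authors' code.

```python
# Minimal sketch of per-instance LOOCV: for each held-out instance, train on
# every other instance's node-level data and evaluate on the held-out
# instance only. `datasets` is an assumed structure: name -> (X, y) arrays.
import numpy as np
from sklearn.linear_model import LogisticRegression

def loocv_per_instance(datasets):
    scores = {}
    for test_name, (X_test, y_test) in datasets.items():
        # Training data excludes every data point from the held-out instance.
        X_train = np.vstack([X for name, (X, _) in datasets.items()
                             if name != test_name])
        y_train = np.concatenate([y for name, (_, y) in datasets.items()
                                  if name != test_name])
        model = LogisticRegression().fit(X_train, y_train)
        scores[test_name] = model.score(X_test, y_test)
    return scores
```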
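The Experiment Setup row quotes the class-weighting choice for logistic regression. Below is a hedged scikit-learn sketch under the assumption that "heavily weighted" is realized via inverse-frequency class weights; the exact weights are not stated in the quote. C=1.0 is scikit-learn's default regularization strength, matching the "default of 1" mentioned above.

```python
# Hedged sketch: up-weight the rare positive labels (y^H_N = 1, i.e. the
# heuristic found an incumbent at that node) while keeping regularization at
# the default of 1. class_weight="balanced" is an assumed stand-in for the
# paper's unspecified weighting scheme.
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(C=1.0, class_weight="balanced", max_iter=1000)
# clf.fit(X_train, y_train)  # data as in the LOOCV sketch above
```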