Learning to Configure Separators in Branch-and-Cut

Authors: Sirui Li, Wenbin Ouyang, Max Paulus, Cathy Wu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive computational experiments demonstrate that our method achieves significant speedup over the competitive MILP solver SCIP on a variety of benchmark MILP datasets and objectives."
Researcher Affiliation | Academia | Sirui Li (MIT, siruil@mit.edu), Wenbin Ouyang (MIT, oywenbin@mit.edu), Max B. Paulus (ETH Zürich, max.paulus@inf.ethz.ch), Cathy Wu (MIT, cathywu@mit.edu)
Pseudocode | Yes | "The detailed algorithm and discussions of the filtering and termination procedure are provided in Appendix A.3. ... The detailed algorithm is provided in Alg. 2 of Appendix A.4. ... We provide the complete training procedure in Alg. 3 of Appendix A.5."
Open Source Code | Yes | "Our code is publicly available at https://github.com/mit-wu-lab/learning-to-configure-separators."
Open Datasets | Yes | "We divide the experiment section into two main parts. First, we evaluate our method on standard MILP benchmarks from Tang et al. [51] and Ecole [43], where the number of variables and constraints range from 60 to 10,000. ... Second, we examine the efficacy of our method by applying it to large-scale real-world MILP benchmarks, including the MIPLIB [20], NN Verification [40], and Load Balancing in the ML4CO challenges [19]."
Dataset Splits | Yes | "By default, we generate a training set K_small of 100 instances for configuration space restriction, another training set K_large of 800 for predictor network training, a validation set of 100 instances, and a test set of 100 instances for each class. Appendix A.6 provides full details of the setup." (See the instance-split sketch after this table.)
Hardware Specification | No | The paper mentions "48 CPU processes" for reward label collection and HPC resources (MIT SuperCloud and Lincoln Laboratory Supercomputing Center) but does not provide specific CPU models, GPU models, or detailed hardware specifications.
Software Dependencies | No | The paper mentions using the SCIP and Gurobi solvers, and PySCIPOpt, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | "We train the networks with ADAM [32] under a learning rate of 10^-3. The reward label collection is performed via multi-processing with 48 CPU processes. As in previous works [51, 42, 54], we train separate models for each MILP class." (See the training-setup sketch after this table.)
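
The per-class instance counts quoted in the Dataset Splits row (100 for K_small, 800 for K_large, 100 validation, 100 test) can be illustrated with a small generation-and-split script. The sketch below is a hypothetical illustration, not the authors' released pipeline: the choice of Ecole's combinatorial auction generator, its parameters, the directory layout, and the file names are all assumptions made for the example.

```python
# Hypothetical sketch of the per-class instance generation and the
# 100 / 800 / 100 / 100 split sizes quoted in the "Dataset Splits" row.
import pathlib

import ecole

SPLITS = {
    "train_small": 100,  # K_small: used for configuration space restriction
    "train_large": 800,  # K_large: used for predictor network training
    "valid": 100,
    "test": 100,
}

# Illustrative benchmark family; the paper draws its synthetic benchmarks
# from Tang et al. [51] and Ecole [43], but the exact generator settings
# here are placeholders.
generator = ecole.instance.CombinatorialAuctionGenerator(n_items=100, n_bids=500)
generator.seed(0)  # fix the seed so the generated splits are reproducible

for split, n_instances in SPLITS.items():
    out_dir = pathlib.Path("data/cauction") / split  # assumed directory layout
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(n_instances):
        model = next(generator)  # an ecole.scip.Model
        # Write via the PySCIPOpt handle so the instance is stored as an LP file.
        model.as_pyscipopt().writeProblem(str(out_dir / f"instance_{i}.lp"))
```
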
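The Experiment Setup row pins down the optimizer (ADAM, learning rate 10^-3) and the 48-process reward-label collection. A minimal sketch of that setup follows, assuming PyTorch and Python's multiprocessing; the predictor architecture, the collect_reward_label helper, and the task list are hypothetical placeholders rather than the paper's actual implementation.

```python
# Hypothetical sketch of the training setup described in the
# "Experiment Setup" row: Adam with lr = 1e-3 and reward-label collection
# over 48 CPU processes.
import multiprocessing as mp

import torch

# Stand-in for the paper's predictor network (architecture is a placeholder).
predictor = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1)
)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)  # ADAM, lr = 10^-3


def collect_reward_label(task):
    """Placeholder: solve one (instance, separator configuration) pair with
    SCIP and return the solve-time-based reward label."""
    instance_path, config = task
    ...  # solver call omitted in this sketch
    return 0.0


if __name__ == "__main__":
    # Placeholder tasks; separator names in the configs are illustrative.
    tasks = [
        ("instance_0.lp", {"clique": True}),
        ("instance_1.lp", {"gomory": False}),
    ]
    with mp.Pool(processes=48) as pool:  # 48 CPU processes, as stated in the paper
        reward_labels = pool.map(collect_reward_label, tasks)

    # One illustrative supervised step on dummy features, just to show how the
    # collected labels would feed the Adam-optimized predictor.
    features = torch.randn(len(reward_labels), 64)
    targets = torch.tensor(reward_labels).unsqueeze(1)
    loss = torch.nn.functional.mse_loss(predictor(features), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

As the paper notes, separate models are trained per MILP class, so a script like this would be run once per benchmark family.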