Learning to Branch

Authors: Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, Ellen Vitercik

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments section, we show that on many datasets based on real-world NP-hard problems, different parameters can result in B&B trees of vastly different sizes. Using an optimal parameter for one distribution on problems from a different distribution can lead to a dramatic tree-size blowup. We use the C API of IBM ILOG CPLEX 12.8.0.0 to override the default variable selection rule using a branch callback. Additionally, our callback performs extra book-keeping to determine a finite set of values for the parameter that give rise to all possible B&B trees for a given instance (for the given choice of branching rules that our algorithm is learning to weight). This ensures that there are no good or bad values for the parameter that get skipped; such skipping could be problematic according to our theory in Section 3.1. We run CPLEX exactly once for each possible B&B tree on each instance. All experiments are run on a cluster of 64 c3.large Amazon AWS instances. For each of the following application domains, Figure 1 shows the average B&B tree size produced for each possible value of the µ parameter for the linear scoring rule, averaged over m independent samples from the distribution. (A sketch of this scoring-and-breakpoint book-keeping appears after the table.)
Researcher Affiliation | Academia | Maria-Florina Balcan, Travis Dick, Tuomas Sandholm, Ellen Vitercik; Computer Science Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA.
Pseudocode | Yes | See Algorithm 1 in Appendix B for the pseudocode.
Open Source Code | No | The paper mentions using the C API of IBM ILOG CPLEX and describes the authors' callback implementation, but it contains no statement that this code is released and no link to a repository for the methodology described.
Open Datasets | Yes | We use the Combinatorial Auction Test Suite (CATS) (Leyton-Brown et al., 2000) to generate these instances. We use the arbitrary generator with 200 bids and 100 goods, and the regions generator with 400 bids and 200 goods.
Dataset Splits | No | The paper discusses the use of a "training set" and generalization guarantees but does not specify details on dataset splits (e.g., exact percentages, absolute counts, or methods like k-fold cross-validation) for training, validation, or testing.
Hardware Specification | Yes | All experiments are run on a cluster of 64 c3.large Amazon AWS instances.
Software Dependencies | Yes | We use the C API of IBM ILOG CPLEX 12.8.0.0 to override the default variable selection rule using a branch callback.
Experiment Setup | Yes | We use the C API of IBM ILOG CPLEX 12.8.0.0 to override the default variable selection rule using a branch callback. Additionally, our callback performs extra book-keeping to determine a finite set of values for the parameter that give rise to all possible B&B trees for a given instance (for the given choice of branching rules that our algorithm is learning to weight). We run CPLEX exactly once for each possible B&B tree on each instance. Following Khalil et al. (2016), Fischetti & Monaci (2012), and Karzan et al. (2009), we disable CPLEX's cuts and primal heuristics, and we also disable its root-node preprocessing. The CPLEX node selection policy is set to best bound (a.k.a. A* in AI), which is the most typical choice in MILP. For each of the following application domains, Figure 1 shows the average B&B tree size produced for each possible value of the µ parameter for the linear scoring rule, averaged over m independent samples from the distribution. (A sketch of this solver configuration appears after the table.)
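
The Research Type and Experiment Setup rows describe a linear scoring rule that blends two branching rules with a parameter µ, plus book-keeping that finds every µ at which the resulting B&B tree can change. The sketch below is our own illustration of that idea, not the authors' code: the hypothetical arrays `s1` and `s2` stand for the per-candidate scores of the two underlying branching rules at one node, and the crossing points of the linear functions µ·s1[i] + (1−µ)·s2[i] partition [0, 1] into intervals on which the selected branching variable, and hence the tree, is fixed.

```c
/* Minimal sketch (not the paper's released code) of the linear scoring
 * rule mu*s1 + (1-mu)*s2 over the fractional candidates at a node, and of
 * the pairwise mu crossing points where the argmax can change. */

/* Index of the candidate maximizing mu*s1[i] + (1-mu)*s2[i]. */
static int best_candidate(double mu, const double *s1, const double *s2, int n)
{
    int best = 0;
    double best_score = mu * s1[0] + (1.0 - mu) * s2[0];
    for (int i = 1; i < n; i++) {
        double score = mu * s1[i] + (1.0 - mu) * s2[i];
        if (score > best_score) {
            best = i;
            best_score = score;
        }
    }
    return best;
}

/* Collect every mu in (0,1) where candidates i and j swap rank.  Two
 * linear functions mu*s1 + (1-mu)*s2 cross at
 *   mu = (s2[j] - s2[i]) / ((s1[i] - s1[j]) - (s2[i] - s2[j])).
 * Caller supplies breaks[] with room for n*(n-1)/2 entries; the number
 * of breakpoints found is returned. */
static int crossing_points(const double *s1, const double *s2, int n,
                           double *breaks)
{
    int k = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            double denom = (s1[i] - s1[j]) - (s2[i] - s2[j]);
            if (denom == 0.0)
                continue;               /* parallel scores: no crossing */
            double mu = (s2[j] - s2[i]) / denom;
            if (mu > 0.0 && mu < 1.0)
                breaks[k++] = mu;
        }
    }
    return k;
}
```

Trying one µ from each interval between consecutive breakpoints (accumulated over all nodes) is what lets the experiments run CPLEX exactly once per possible B&B tree, with no good or bad parameter values skipped.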
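
The Experiment Setup row also lists the solver configuration: cuts, primal heuristics, and root-node preprocessing disabled, with best-bound node selection. Below is a minimal sketch of that configuration via the CPLEX 12.8 C API; the particular parameter constants are our reading of the quoted description, not published configuration code, and the branch callback itself would be installed separately via CPXsetbranchcallbackfunc.

```c
#include <ilcplex/cplex.h>

/* Apply the solver settings quoted above to an open CPLEX environment.
 * Returns 0 on success, a nonzero CPLEX status code otherwise. */
int configure_cplex(CPXENVptr env)
{
    int status;
    /* Best-bound node selection (the "A*-like" policy). */
    if ((status = CPXsetintparam(env, CPX_PARAM_NODESEL,
                                 CPX_NODESEL_BESTBOUND)))
        return status;
    /* Disable primal heuristics. */
    if ((status = CPXsetintparam(env, CPX_PARAM_HEURFREQ, -1)))
        return status;
    /* Disable cutting-plane passes. */
    if ((status = CPXsetintparam(env, CPX_PARAM_CUTPASS, -1)))
        return status;
    /* Disable presolve (root-node preprocessing). */
    return CPXsetintparam(env, CPX_PARAM_PREIND, CPX_OFF);
}
```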