Learning to Search in Local Branching

Authors: Defeng Liu, Matteo Fischetti, Andrea Lodi (pp. 3796-3803)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We computationally show that the neighborhood size can indeed be learned, leading to improved performances, and that the overall algorithm generalizes well both with respect to the instance size and, remarkably, across instances. In this section, we present the details of our experimental results over five MILP benchmarks. We compare different settings of our approach against the original LB algorithm, using SCIP (Gamrath et al. 2020) as the underlying MILP solver."
Researcher Affiliation | Academia | Defeng Liu (1), Matteo Fischetti (2), Andrea Lodi (1, 3). 1: Department of Mathematics and Industrial Engineering, Polytechnique Montréal, Canada; 2: Department of Information Engineering, University of Padova, Italy; 3: Jacobs Technion-Cornell Institute, Cornell Tech and Technion-IIT, USA
Pseudocode | Yes | Algorithm 1: LB with Scaled Regression; Algorithm 2: Reinforced Neighborhood Search
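As a rough illustration of the neighborhood searched by the LB algorithms above (a sketch under assumed notation, not the authors' code): for a binary incumbent x_bar, the local branching constraint of radius k restricts the search to solutions within Hamming distance k of x_bar.

```python
# Sketch of the local branching neighborhood test (illustrative only).

def hamming_distance(x, x_bar):
    """Number of coordinates where the two binary vectors differ."""
    return sum(int(a != b) for a, b in zip(x, x_bar))

def in_lb_neighborhood(x, x_bar, k):
    """True if x satisfies the local branching constraint
    sum_{j: x_bar_j = 1} (1 - x_j) + sum_{j: x_bar_j = 0} x_j <= k,
    which for binary vectors equals Hamming distance <= k."""
    return hamming_distance(x, x_bar) <= k

x_bar = [1, 0, 1, 1, 0]  # toy incumbent
```

The learned quantity in the paper is precisely the radius k (via the scaling factor), which this membership test takes as given.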
Open Source Code | Yes | "Our code and more details of the experiment environment are publicly available." [footnote: https://github.com/pandat8/ML4LB]
Open Datasets | Yes | "MILP Instances: We evaluate on five MILP benchmarks: set covering (SC) (Balas and Ho 1980), maximum independent set (MIS) (Bergman et al. 2016), combinatorial auction (CA) (Leyton-Brown et al. 2000), generalized independent set problem (GISP) (Hochbaum and Pathria 1997; Colombi et al. 2017), and MIPLIB 2017 (Gleixner et al. 2021)."
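To make the benchmark format concrete, a toy random set covering generator might look like the following; the parameters and the coverage rule are illustrative assumptions, not the exact Balas and Ho scheme used by the paper:

```python
# Toy random set covering instance: A[i][j] = 1 if set j covers element i.
import random

def generate_set_cover(n_rows=10, n_cols=20, density=0.2, seed=0):
    rng = random.Random(seed)
    A = [[1 if rng.random() < density else 0 for _ in range(n_cols)]
         for _ in range(n_rows)]
    # Guarantee feasibility: every element must be coverable by >= 2 sets.
    for row in A:
        while sum(row) < 2:
            row[rng.randrange(n_cols)] = 1
    costs = [rng.randint(1, 100) for _ in range(n_cols)]  # column costs
    return A, costs

A, costs = generate_set_cover()
```

Each instance would then be written out as a MILP (minimize cost subject to every row being covered) and solved by SCIP.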
Dataset Splits | Yes | "For each reference set of SC, MIS and CA problems, we generate a dataset of 200 instances, and split each dataset into training (70%), validation (10%), and test (20%) sets."
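The 70/10/20 split described above can be sketched as follows; the instance file names and the seed are placeholders, not taken from the paper:

```python
# Shuffle-then-slice split into train/validation/test partitions.
import random

def split_dataset(instances, seed=0):
    idx = list(range(len(instances)))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    n = len(instances)
    n_train, n_valid = int(0.7 * n), int(0.1 * n)
    train = [instances[i] for i in idx[:n_train]]
    valid = [instances[i] for i in idx[n_train:n_train + n_valid]]
    test = [instances[i] for i in idx[n_train + n_valid:]]
    return train, valid, test

# 200 instances per benchmark, as in the paper.
train, valid, test = split_dataset([f"inst_{i}.lp" for i in range(200)])
```

With 200 instances this yields 140 training, 20 validation, and 40 test instances.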
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | Yes | "using SCIP (Gamrath et al. 2020) as the underlying MILP solver." [The cited work is "The SCIP Optimization Suite 7.0."]
Experiment Setup | Yes | "To collect data for the scaled regression task... we choose to use the grid search method. In particular, given a MILP instance, an initial incumbent solution, the LP relaxation solution, and a time limit for an LB iteration, we evaluate φ0 over (0, 1) with a resolution limit of 0.01. The step size kstep is a hyperparameter of the algorithm. We use two initial incumbent solutions: (1) the first solution found by SCIP; (2) an intermediate solution found by SCIP... We train the regression model in two scenarios: the first trains the model on the training sets of SC, MIS and CA separately; the other trains a single model on a mixed dataset of the three training sets."
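The φ0 labeling step can be sketched as a simple grid search over (0, 1) at resolution 0.01; here `evaluate_lb` is a hypothetical callback standing in for one time-limited LB iteration that returns the objective improvement achieved with a given φ0:

```python
# Grid search over candidate scaling factors phi_0 (illustrative sketch).

def label_phi0(evaluate_lb, resolution=0.01):
    """Evaluate phi_0 on the grid {0.01, 0.02, ..., 0.99} and return
    the best candidate together with its score. `evaluate_lb` is an
    assumed stand-in for running one LB iteration under a time limit."""
    grid = [i * resolution for i in range(1, round(1 / resolution))]
    best_phi = max(grid, key=evaluate_lb)
    return best_phi, evaluate_lb(best_phi)

# Toy stand-in whose improvement peaks at phi_0 = 0.30.
phi_star, score = label_phi0(lambda p: -(p - 0.30) ** 2)
```

In the paper these labels are collected per instance and then used as regression targets; the sketch only shows the search loop itself.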