Learning to Cut by Looking Ahead: Cutting Plane Selection via Imitation Learning
Authors: Max B. Paulus, Giulia Zarpellon, Andreas Krause, Laurent Charlin, Chris Maddison
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with a B&C solver for neural network verification further validate our approach, and exhibit the potential of learning methods in this setting. |
| Researcher Affiliation | Academia | 1 Department of Computer Science, ETH Zürich, Zürich, Switzerland; 2 Vector Institute, Toronto, Canada; 3 Department of Decision Sciences, HEC Montréal, Montréal, Canada; 4 Mila, Montréal, Canada; 5 Department of Computer Science and Department of Statistical Sciences, University of Toronto, Toronto, Canada. |
| Pseudocode | No | The paper describes methods and architectures but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We compare Lookahead to common heuristics for cut selection on 510 bounded and feasible instances from the easy collection of MIPLIB 2017 (Gleixner et al., 2021). and We consider the four classes of MILP models used in Tang et al. (2020) and reproduce their generators for instances of type Maximum Cut, Packing, Binary Packing and Planning; we refer to Tang et al. (2020) supplementary material for details on the models' mathematical formulations. and The dataset we use to stress-test our policies was open-sourced by Nair et al. (2020). https://github.com/deepmind/deepmind-research/tree/master/neural_mip_solving |
| Dataset Splits | Yes | For each domain, we generated 1000 instances for training, 500 instances for validation and 500 instances for testing. and We considered 1260 instances for training, 271 instances for validation and 276 instances for testing. |
| Hardware Specification | Yes | We collect data for training and evaluate all methods on a distributed compute cluster which predominantly contains Intel Xeon E3-1585Lv5 CPUs. A single GPU device (NVIDIA GeForce GTX 1080 Ti) is only used for training the models, but not for evaluation. |
| Software Dependencies | Yes | We derive a custom version of the SCIP solver (v.7.0.2) to expose and isolate the cut selection task as defined in Section 3. |
| Experiment Setup | Yes | We train Neural Cut policies with ADAM (Kingma & Ba, 2015) using the default PyTorch settings (Paszke et al., 2019). and In our experiments, we fix K = 10, similarly to SCIP's own parameter. and After pre-solving each instance, we perform 30 separation rounds and add a single cut per round to the LP relaxation. |