Learning to Remove Cuts in Integer Linear Programming

Authors: Pol Puigdemont, Stratis Skoulakis, Grigorios Chrysos, Volkan Cevher

ICML 2024

Each entry below gives a reproducibility variable, the assessed result, and the LLM response supporting that assessment.
Research Type: Experimental. We divide our experiments into two main parts. The first focuses on evaluating the performance of cut removal against multiple benchmark policies by rolling them out on synthetic test MILP instances for each of the problem families in a controlled environment. Next, we investigate how well our trained models generalize to larger instances.
Researcher Affiliation: Academia. (1) LIONS, École Polytechnique Fédérale de Lausanne, Switzerland; (2) work developed during an exchange from Universitat Politècnica de Catalunya (UPC), Spain; (3) Department of Electrical and Computer Engineering, University of Wisconsin-Madison, USA.
Pseudocode: Yes. Algorithm 1 (Cutting Plane Method) and Algorithm 2 (Cutting Plane Method with Cut Removal).
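Since the paper's pseudocode is not reproduced here, the following is a minimal, self-contained sketch of a cutting-plane loop extended with a cut-removal step, in the spirit of Algorithms 1 and 2. The Chvátal-Gomory cuts with multiplier 1/2, the slack-based removal score, and the pool budget are illustrative assumptions; in the paper the removal decision is what is learned, and here a hand-coded heuristic stands in for that policy.

```python
import math
import numpy as np
from scipy.optimize import linprog

def cutting_planes_with_removal(c, A, b, bounds, max_pool=5, max_rounds=20):
    """Maximize c @ x over integer x >= 0 with A @ x <= b, pruning weak cuts."""
    cuts_A, cuts_b = [], []                            # current cut pool
    for _ in range(max_rounds):
        res = linprog(c=[-ci for ci in c],             # linprog minimizes
                      A_ub=np.array(A + cuts_A, dtype=float),
                      b_ub=np.array(b + cuts_b, dtype=float),
                      bounds=bounds, method="highs")
        x = res.x
        if all(abs(v - round(v)) < 1e-6 for v in x):   # integral: solved
            return [round(v) for v in x]
        # Cut removal: keep only the max_pool tightest cuts at x. A small
        # slack means the cut is still active; this slack heuristic is an
        # illustrative stand-in for the paper's learned removal policy.
        if len(cuts_A) > max_pool:
            slack = [bi - float(np.dot(ai, x)) for ai, bi in zip(cuts_A, cuts_b)]
            keep = np.argsort(slack)[:max_pool]
            cuts_A = [cuts_A[i] for i in keep]
            cuts_b = [cuts_b[i] for i in keep]
        # Cut addition: one Chvatal-Gomory cut per original row with
        # multiplier 1/2 (flooring keeps the cut valid for integer x >= 0).
        added = False
        for ai, bi in zip(A, b):
            cut = [math.floor(0.5 * aij) for aij in ai]
            rhs = math.floor(0.5 * bi)
            if np.dot(cut, x) > rhs + 1e-6:            # violated at x: add it
                cuts_A.append(cut)
                cuts_b.append(rhs)
                added = True
        if not added:
            return None  # no violated cut found; a full solver would branch

# max x + y  s.t.  2x + 2y <= 3,  x, y in {0, 1}  ->  optimum value 1
print(cutting_planes_with_removal([1, 1], [[2, 2]], [3], [(0, 1), (0, 1)]))
```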
Open Source Code: No. The paper does not provide a direct link to its source code or explicitly state that the code is publicly released.
Open Datasets: No. The paper describes how it generates its own instances for different problem families (e.g., 'For set cover we suggest our own probabilistic formulation. Details on the generation of the instances can be found in the Appendix B.1.'). It does not provide concrete access information (link, DOI, specific repository) for a pre-existing publicly available dataset.
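The paper's own probabilistic formulation is detailed in its Appendix B.1 and is not reproduced here; the sketch below is only a generic stand-in showing what a random set-cover instance generator typically looks like. The density p, cost range, and instance sizes are arbitrary assumptions.

```python
import numpy as np

def random_set_cover(n_elements=50, n_sets=100, p=0.1, seed=0):
    """Generic random set-cover MILP data: min costs @ x, A @ x >= 1, x binary."""
    rng = np.random.default_rng(seed)
    # A[i, j] = 1 if set j covers element i; resample any uncovered row so
    # that every element is covered by at least one set (feasibility).
    A = rng.binomial(1, p, size=(n_elements, n_sets))
    for i in range(n_elements):
        while A[i].sum() == 0:
            A[i] = rng.binomial(1, p, size=n_sets)
    costs = rng.integers(1, 100, size=n_sets)  # one cost per set
    return A, costs
```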
Dataset Splits: Yes. For each problem family, we use 2000 instances for training, 500 instances for validation, and 500 instances for testing, as done in Paulus et al. (2022).
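For concreteness, a hypothetical partition of generated instance files matching these counts; only the 2000/500/500 split comes from the paper, the filenames are assumed.

```python
# Assumed filenames; the paper specifies only the split sizes.
instances = [f"instance_{i:04d}.lp" for i in range(3000)]
train, valid, test = instances[:2000], instances[2000:2500], instances[2500:]
```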
Hardware Specification: No. The paper does not specify any particular hardware (e.g., GPU/CPU models, memory) used for running the experiments. It only mentions general 'compute resources'.
Software Dependencies: Yes. In order to stress-test our environment and compute the optimal solution required to obtain the IGC metrics, we use the SCIP solver (Bestuzheva et al., 2023).
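A minimal sketch of obtaining the optimal value with SCIP through its PySCIPOpt Python bindings (the instance path is hypothetical). The final comment gives the usual form of the integrality-gap-closed metric, which the paper's IGC presumably follows.

```python
from pyscipopt import Model

model = Model()
model.readProblem("instance.lp")   # hypothetical instance file
model.optimize()
if model.getStatus() == "optimal":
    z_star = model.getObjVal()     # optimal MILP value used as IGC reference

# With z_lp the LP-relaxation value before cuts and z_t the LP value after t
# rounds of cuts, the gap closed is typically (z_t - z_lp) / (z_star - z_lp).
```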
Experiment Setup: Yes. We trained our models with SGD with a learning rate of 5 × 10⁻³ for 50 epochs, using a batch size of 104 and a patience parameter of 5.
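A hedged reconstruction of this training setup in PyTorch: plain SGD at learning rate 5 × 10⁻³, 50 epochs, and early stopping with patience 5 on validation loss. The model, data, and loss are synthetic placeholders, not the authors' architecture; only the optimizer, epoch count, batch size, and patience follow the quoted setup.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in data and a placeholder linear model.
X, y = torch.randn(512, 16), torch.randn(512, 1)
train_loader = DataLoader(TensorDataset(X[:400], y[:400]), batch_size=104)
val_loader = DataLoader(TensorDataset(X[400:], y[400:]), batch_size=104)
model = torch.nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=5e-3)
loss_fn = torch.nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(50):
    model.train()
    for xb, yb in train_loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        val = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader)
    if val < best_val:
        best_val, bad_epochs = val, 0
    else:
        bad_epochs += 1            # early stopping on validation loss
        if bad_epochs >= patience:
            break
```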