Learning to Stop Cut Generation for Efficient Mixed-Integer Linear Programming
Authors: Haotian Ling, Zhihai Wang, Jie Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that HYGRO significantly improves the efficiency of solving MILPs, achieving up to 31% improvement on six challenging MILP problem benchmarks compared to eight competitive baselines. |
| Researcher Affiliation | Academia | University of Science and Technology of China; {haotianling,zhwangx}@mail.ustc.edu.cn; jiewangx@ustc.edu.cn |
| Pseudocode | No | The paper describes the method and training process in text and diagrams, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states "We incorporate our proposed HYGRO into the open-source state-of-the-art MILP solver, namely SCIP (Bestuzheva et al. 2021)." but does not provide a link or explicit statement about the open-source availability of their own code for HYGRO. |
| Open Datasets | Yes | We employed six NP-hard problem datasets as benchmarks, which are categorized into two groups. Classical NP-hard combinatorial optimization problems widely served as benchmarks, including Multiple Knapsack Problem (Kellerer, Pferschy, and Pisinger 2004), Maximum Independent Set (MIS) (Hartmanis 1982), and Set Covering (Balas and Ho 1980). ... Harder datasets, including Corlat (Atamtürk 2003), MIK (Gomes, van Hoeve, and Sabharwal 2008), and Anonymous (Gasse et al. 2022) dataset. |
| Dataset Splits | No | The paper states "We divided each dataset into training and testing sets with 75% and 25% of instances, respectively," but does not explicitly mention a validation set or its split percentage. |
| Hardware Specification | No | The paper mentions "the hardware utilized in this experiment differs from that of the other experiments" but does not provide specific details such as CPU/GPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper mentions "SCIP", "ADAM optimizer", and "PyTorch" but does not provide specific version numbers for these software components, which are necessary for reproducibility. |
| Experiment Setup | No | The paper mentions a "300-second time limit" and "default SCIP parameter settings" along with the "ADAM optimizer", but does not provide specific hyperparameters such as learning rate, batch size, or number of epochs for training HYGRO. |