MILP-FBGen: LP/MILP Instance Generation with Feasibility/Boundedness
Authors: Yahong Zhang, Chenchen Fan, Donghui Chen, Congrui Li, Wenli Ouyang, Mingda Zhu, Junchi Yan
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive studies show a two-fold superiority: our method ensures higher distributional similarity and 100% feasibility in both easy and hard datasets, surpassing current state-of-the-art techniques. |
| Researcher Affiliation | Collaboration | (1) AI Lab, Lenovo Research, Beijing, China; (2) School of Artificial Intelligence & Department of Computer Science and Engineering & MoE Lab of AI, Shanghai Jiao Tong University, Shanghai, China. |
| Pseudocode | Yes | Algorithm 1 MILP-FBGen with task-oriented selection strategy. |
| Open Source Code | No | Source code will be made publicly available. |
| Open Datasets | Yes | Easy: This category includes two MILP datasets, i.e., Set Covering (SC) (Balas & Ho, 1980) and Mixed-integer Knapsack (MIK) (Atamtürk, 2003). ... Medium: This group consists of two publicly available LP datasets, Maritime Inventory Routing Problems (MIRP) (Fan et al., 2023) and PDS (Mittelmann). ... Hard: We adopt the MILP datasets utilized in ML4CO (Gasse et al., 2022), including Workload Apportionment (WA) and Anonymous. Mittelmann, H. Benchmark of Simplex LP Solvers. https://plato.asu.edu/ftp/lptestset/. |
| Dataset Splits | Yes | For each task, 60% of the original dataset is used as the training set, 15% as the validation set, and the remaining 25% serves as the test set. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not specify the hardware used for experiments (e.g., specific GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | Yes | We utilize the commercial solver Gurobi (Gurobi) to solve instances...; The SCIP Optimization Suite 8.0, https://www.scipopt.org/, 2021; We modify the open-source solver HiGHS (HiGHS) to enable it to receive an external initial basis and hyperparameters. |
| Experiment Setup | Yes | We generate more instances w.r.t. each original one in the training set, to double, triple, or even quadruple the original dataset size; unless otherwise specified, the doubled dataset is used. For most datasets, instance generation is performed with different replacement ratios η... By default, a five-layer GNN is used, the size of the hidden layers is 128, and the dropout ratio is 0.1. We train for 800 epochs and select the best checkpoint based on the validation accuracy. ... We choose 11 key parameters of the primal simplex method in HiGHS (HiGHS); the details of the parameters are described in Appendix D.2.2. (Hedged sketches of the augmentation loop, the GNN configuration, and the HiGHS option mechanism follow the table.) |
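
For concreteness, the 60/15/25 split quoted under Dataset Splits can be reproduced along the lines below. This is a minimal sketch: the function name, the seeding, and the treatment of instances as file paths are our own illustrative choices, not details from the paper.

```python
import random

def split_instances(paths, seed=0):
    """Shuffle instance files and split them 60% / 15% / 25% into
    train / validation / test, matching the ratios reported in the paper."""
    rng = random.Random(seed)  # fixed seed for a reproducible split (our choice)
    paths = list(paths)
    rng.shuffle(paths)
    n_train = int(0.60 * len(paths))
    n_val = int(0.15 * len(paths))
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]  # remaining ~25%
    return train, val, test
```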
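
The Experiment Setup row describes generating extra instances per original to double, triple, or quadruple the training set, controlled by a replacement ratio η. The outer loop might look as follows; `generator.sample` and its `replacement_ratio` argument are hypothetical stand-ins for the paper's learned MILP-FBGen generator.

```python
def augment_dataset(instances, generator, copies_per_instance=1, eta=0.1):
    """Return the originals plus `copies_per_instance` generated variants of
    each one: 1 doubles the dataset, 2 triples it, 3 quadruples it.
    `generator.sample` and `eta` (the replacement ratio) are hypothetical
    stand-ins for the paper's learned generator."""
    augmented = list(instances)
    for inst in instances:
        for _ in range(copies_per_instance):
            augmented.append(generator.sample(inst, replacement_ratio=eta))
    return augmented
```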
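
The quoted network configuration (five GNN layers, hidden size 128, dropout 0.1) maps onto a PyTorch-style skeleton such as the one below. The message-passing rule is an assumption, since the quoted setup does not pin down the exact layer type; training would then run for 800 epochs, keeping the checkpoint with the best validation accuracy.

```python
import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    """One message-passing layer; the aggregation rule (adj @ h) is an
    assumed placeholder, not necessarily the paper's exact operator."""
    def __init__(self, hidden):
        super().__init__()
        self.lin = nn.Linear(hidden, hidden)
        self.drop = nn.Dropout(p=0.1)  # dropout ratio 0.1, as reported

    def forward(self, h, adj):
        return self.drop(torch.relu(adj @ self.lin(h)))

class FiveLayerGNN(nn.Module):
    def __init__(self, in_dim, hidden=128, depth=5):  # 128 and 5, as reported
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.layers = nn.ModuleList([GNNLayer(hidden) for _ in range(depth)])

    def forward(self, x, adj):
        h = self.embed(x)
        for layer in self.layers:
            h = layer(h, adj)
        return h
```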
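
The 11 tuned primal-simplex parameters live in the paper's Appendix D.2.2 and are not reproduced in the quotes above, so the sketch below shows only the mechanism: setting simplex options through the stock `highspy` bindings. The option values are placeholders, and the external-initial-basis interface mentioned under Software Dependencies is the authors' modification, not part of stock HiGHS.

```python
import highspy

h = highspy.Highs()
h.setOptionValue("solver", "simplex")                   # force the simplex solver
h.setOptionValue("simplex_strategy", 4)                 # 4 = primal simplex
h.setOptionValue("primal_feasibility_tolerance", 1e-7)  # placeholder value
h.setOptionValue("simplex_iteration_limit", 100000)     # placeholder value
h.readModel("instance.mps")                             # hypothetical file name
h.run()
print(h.getModelStatus())
```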