Smart Initial Basis Selection for Linear Programs
Authors: Zhenan Fan, Xinglu Wang, Oleksandr Yakovenko, Abdullah Ali Sivas, Owen Ren, Yong Zhang, Zirui Zhou
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of our proposed strategy by extensively testing it with state-of-the-art simplex solvers, including the open-source solver HiGHS and the commercial solver OptVerse. Through these rigorous experiments, we demonstrate that our strategy achieves substantial speedup and consistently outperforms existing rule-based methods. |
| Researcher Affiliation | Collaboration | ¹Huawei Technologies Canada, Burnaby, Canada. ²Simon Fraser University, Burnaby, Canada. |
| Pseudocode | Yes | Algorithm 1 Smart Initial Basis Selection Algorithm for Linear Programs |
| Open Source Code | Yes | Our code is publicly available at Huawei AI Gallery: https://developer.huaweicloud.com/develop/aigallery/notebook/detail?id=ce45dd10-44ce-43bb-89c8-1f3277f1132d |
| Open Datasets | Yes | The publicly available datasets include the Maritime Inventory Routing Problems (MIRP) (Papageorgiou et al., 2014), the One-norm Support Vector Machine Instances (LIBSVM) (Zhu et al., 2003; Applegate et al., 2021), and the 2-stage Stochastic Problems (STOCH) (Castro & de la Lama-Zubirán, 2020). |
| Dataset Splits | Yes | To ensure a fair evaluation, each dataset is split into training and test sets in a 7:3 ratio. |
| Hardware Specification | Yes | The GNN model is trained on an NVIDIA V100. The evaluation is conducted on a system comprised of an 8-core CPU (Intel Xeon E5-2690 v4) and 64 GB of memory, utilizing Ubuntu 18.04 Docker containers for solver execution. |
| Software Dependencies | Yes | We implement our approach with Python 3.7, PyTorch 1.8 and the PyG framework (Fey & Lenssen, 2019). |
| Experiment Setup | Yes | By default, a 3-layer lightweight GNN is used for the SC-1 and SC-2 datasets, and a 5-layer GNN is used for the other datasets. The size of the hidden layers is 128 and the dropout ratio is 0.1. ... We do not tune hyper-parameters for training the GNN and just adopt commonly used ones, such as the Adam optimizer with initial learning rate 1e-3 and weight decay 1e-4. The learning rate decays by 0.1 every 200 epochs and the total number of training epochs is 800. (See the configuration sketch below this table.) |
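
To make the quoted training configuration concrete, here is a minimal sketch (not the authors' released code) that wires up the reported hyper-parameters: hidden size 128, dropout 0.1, Adam with learning rate 1e-3 and weight decay 1e-4, a 0.1 learning-rate decay every 200 epochs, and 800 training epochs. The GNN body (PyG's `GraphConv`), the cross-entropy loss, and the assumed PyG `DataLoader` yielding graphs with per-node labels in `batch.y` are placeholders; the paper's actual model and loss may differ, so this only reflects the hyper-parameters quoted above.

```python
# Sketch of the reported training setup, assuming a node-classification-style
# GNN built with PyG. Model architecture and loss are illustrative only.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GraphConv


class BasisGNN(torch.nn.Module):
    def __init__(self, in_dim: int, num_classes: int, num_layers: int = 5,
                 hidden: int = 128, dropout: float = 0.1):
        super().__init__()
        dims = [in_dim] + [hidden] * (num_layers - 1) + [num_classes]
        self.convs = torch.nn.ModuleList(
            GraphConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])
        )
        self.dropout = dropout

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
            x = F.dropout(x, p=self.dropout, training=self.training)
        return self.convs[-1](x, edge_index)  # per-node logits


def train(model, loader, device="cuda"):
    # Adam with lr 1e-3, weight decay 1e-4; lr decays by 0.1 every 200 epochs;
    # 800 epochs total, as reported in the experiment setup.
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=200, gamma=0.1)
    for epoch in range(800):
        model.train()
        for batch in loader:  # assumed: PyG DataLoader with node labels in batch.y
            batch = batch.to(device)
            opt.zero_grad()
            logits = model(batch.x, batch.edge_index)
            loss = F.cross_entropy(logits, batch.y)
            loss.backward()
            opt.step()
        sched.step()
```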