Learning Linear Programs from Optimal Decisions
Authors: Yingcong Tan, Daria Terekhov, Andrew Delong
NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our method successfully learns synthetic linear programs and minimum-cost multi-commodity flow instances for which previous methods are not directly applicable. We evaluate our approach by learning a range of synthetic LPs and parametric instances of minimum-cost multi-commodity flow problems. Use of synthetic instances is common in IO (e.g., Ahuja and Orlin [2001], Keshavarz et al. [2011], Dong et al. [2018]) and there are no community-established and readily-available benchmarks, especially for more general formulations. Our experimental study considers instances not directly addressable by previous IO work, either because we learn all coefficients jointly or because the parametrization results in a non-convex NLP. We compare four versions of our gradient-based method (SQPbprop, SQPimpl, SQPdir, SQPcvx) with two gradient-free methods: random search (RS) and COBYLA. A gradient-free baseline is applicable to (ILOP) only if it (i) supports general bilevel programs natively, or (ii) allows the objective and constraints to be specified by callbacks (a hedged sketch of such a callback-based COBYLA baseline appears below the table). COBYLA is conceptually similar to SQP and can be readily applied to (ILOP), but many otherwise-powerful solvers such as BARON [Sahinidis, 2017], CPLEX [IBM, 2020] and Gurobi [Gurobi Optimization, 2020] cannot. Complete experimental results for synthetic LPs are presented in Figures 3, 7, 8, and 9. The main observation is that the gradient-based methods perform similarly and become superior to gradient-free methods as the dimension K of the parametrization w increases. |
| Researcher Affiliation | Academia | Concordia University, Montréal, Canada {yingcong.tan, daria.terekhov, andrew.delong}@concordia.ca |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code to reproduce experiments is available at https://github.com/yingcongtan/ilop. |
| Open Datasets | Yes | We evaluate our approach by learning a range of synthetic LPs and parametric instances of minimum-cost multi-commodity flow problems. Use of synthetic instances is common in IO (e.g., Ahuja and Orlin [2001], Keshavarz et al. [2011], Dong et al. [2018]) and there are no community-established and readily-available benchmarks, especially for more general formulations. We used the LP generator of Tan et al. [2019], modifying it to create a more challenging variety of feasible regions. Fig. 4 shows a visualization of our experiment on the Nguyen-Dupuis graph [Nguyen and Dupuis, 1984]. (A hedged sketch of a synthetic-LP generator in this spirit appears below the table.) |
| Dataset Splits | Yes | Box plots show final training and testing loss of 100 different trial instances, each with 20 training and 20 testing points (distinct u values). |
| Hardware Specification | Yes | Experiments used PyTorch v1.6 nightly build, the COBYLA and SLSQP wrappers from SciPy v1.4.1, and were run on an Intel Core i7 with 16GB RAM. |
| Software Dependencies | Yes | Experiments used PyTorch v1.6 nightly build, the COBYLA and SLSQP wrappers from SciPy v1.4.1. (A minimal version-check snippet appears below the table.) |
| Experiment Setup | Yes | We do not regularize w nor have any other hyperparameters. Our method can therefore be viewed as a way to boost the probability of successful training when combined with simple global optimization strategies such as multi-start. (A minimal multi-start sketch appears below the table.) |
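
On the applicability criterion quoted in the Research Type row: below is a minimal sketch of what a callback-based COBYLA baseline could look like. The lower-level LP, the `solve_lp` parametrization, the data, and the loss are all hypothetical stand-ins, not the paper's actual (ILOP) formulation.

```python
# Hedged sketch: a gradient-free COBYLA baseline on a callback-defined loss.
# solve_lp, the data, and the parametrization are hypothetical stand-ins.
import numpy as np
from scipy.optimize import linprog, minimize

rng = np.random.default_rng(0)
u_train = rng.normal(size=20)            # 20 training conditions u
x_obs = rng.uniform(0, 1, size=(20, 3))  # observed optimal decisions

def solve_lp(w, u):
    """Lower-level problem: an LP whose cost vector depends on (w, u)."""
    c = w + 0.1 * u                      # hypothetical parametrization
    return linprog(c, bounds=[(0, 1)] * 3).x

def loss(w):
    """Upper-level objective: squared decision error on the training set."""
    preds = np.array([solve_lp(w, u) for u in u_train])
    return float(((preds - x_obs) ** 2).sum())

# COBYLA needs only objective values, so the callback interface suffices.
res = minimize(loss, x0=np.zeros(3), method='COBYLA',
               options={'maxiter': 200, 'rhobeg': 0.5})
print(res.fun)
```

COBYLA treats `loss` as a black box, which is exactly why it stays applicable even though the LP solution, and hence the loss, is nonsmooth in w.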
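On the Open Datasets row: a minimal sketch of generating a synthetic LP instance with a guaranteed nonempty feasible region, in the spirit of (but not identical to) the Tan et al. [2019] generator. The dimensions, distributions, and function name are assumptions.

```python
# Hedged sketch of a synthetic-LP generator; not the Tan et al. [2019] code.
import numpy as np

def random_feasible_lp(n_vars=10, n_cons=6, seed=0):
    """Sample an LP  min c @ x  s.t.  A @ x <= b, x >= 0  that is feasible."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n_cons, n_vars))
    x0 = rng.uniform(0.1, 1.0, size=n_vars)          # planted feasible point
    b = A @ x0 + rng.uniform(0.1, 1.0, size=n_cons)  # slack keeps x0 interior
    c = rng.normal(size=n_vars)
    return c, A, b

c, A, b = random_feasible_lp()
```

Planting a feasible point x0 and padding b with positive slack guarantees the region is nonempty without solving anything.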
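On the Software Dependencies row: a trivial environment check. The pinned versions in the comments simply mirror the quoted text and may need adjusting for a modern setup.

```python
# Minimal check that the two quoted dependencies import, with their versions.
import scipy
import torch

print("torch:", torch.__version__)  # paper used a PyTorch v1.6 nightly build
print("scipy:", scipy.__version__)  # paper used SciPy v1.4.1
```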
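On the Experiment Setup row: a minimal, self-contained multi-start loop. The toy loss and the COBYLA local solver are illustrative; the paper's local steps are SQP-based.

```python
# Hedged sketch of multi-start: run a local solver from several random
# initializations of w and keep the best final loss.
import numpy as np
from scipy.optimize import minimize

def toy_loss(w):
    return float(np.sum((w - 1.0) ** 2))  # stand-in for the (ILOP) loss

def multi_start(loss_fn, dim, n_starts=10, seed=0):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        w0 = rng.normal(size=dim)          # random initialization
        res = minimize(loss_fn, w0, method='COBYLA',
                       options={'maxiter': 200})
        if best is None or res.fun < best.fun:
            best = res
    return best

print(multi_start(toy_loss, dim=3).fun)
```

Because the method has no hyperparameters to tune, the only global-search knob in this sketch is `n_starts`.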