Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient

Authors: David Applegate, Mateo Díaz, Oliver Hinder, Haihao Lu, Miles Lubin, Brendan O'Donoghue, Warren Schudy

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. ... We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017."
Researcher Affiliation | Collaboration | David Applegate (Google Research, dapplegate@google.com); Mateo Díaz (California Institute of Technology, mateodd@caltech.edu); Oliver Hinder (Google Research and University of Pittsburgh, ohinder@pitt.edu); Haihao Lu (University of Chicago, haihao.lu@chicagobooth.edu); Miles Lubin (Google Research, mlubin@google.com); Brendan O'Donoghue (DeepMind, bodonoghue@deepmind.com); Warren Schudy (Google Research, wschudy@google.com)
Pseudocode | Yes | "Algorithm 1 presents pseudo-code for PDLP after preprocessing steps." (A minimal sketch of the underlying PDHG iteration appears after the table.)
Open Source Code | Yes | "PDLP is implemented in an open-source Julia [15] module available at https://github.com/google-research/FirstOrderLp.jl."
Open Datasets | Yes | "We use three datasets to compare algorithmic performance. One is the LP benchmark dataset of 56 problems, formed by merging the instances from Benchmark of Simplex LP Solvers, Benchmark of Barrier LP Solvers, and Large Network-LP Benchmark from [45]. We also created a larger benchmark of 383 instances curated from LP relaxations of mixed-integer programming problems from the MIPLIB 2017 collection [34] (see Appendix B) that we label MIP Relaxations. ... Finally, we also performed some experiments on the Netlib LP benchmark [31], an historically important benchmark that is no longer state of the art for large-scale LP."
Dataset Splits | Yes | "MIP Relaxations was used extensively during algorithmic development, e.g., for hyperparameter choices; we held out LP benchmark as a test set."
Hardware Specification | Yes | "We used two computing environments for our experiments: 1) e2-highmem-2 virtual machines (VMs) on Google Cloud Platform (GCP); each VM provides two virtual CPUs and 16 GB RAM. 2) A dedicated workstation with an Intel Xeon E5-2669 v3 processor and 128 GB RAM."
Software Dependencies | Yes | "PDLP is implemented in an open-source Julia [15] module... SCS [54] version 2.1.3, an open-source generic cone solver based on ADMM, and Gurobi version 9.0.1, a state-of-the-art commercial LP solver."
Experiment Setup | Yes | "PDLP terminates with an approximately optimal solution when the primal-dual iterates $x \in X$, $y \in Y$, $\lambda \in \Lambda$ satisfy:
$|q^\top y + l^\top \lambda^+ - u^\top \lambda^- - c^\top x| \le \epsilon\,(1 + |q^\top y + l^\top \lambda^+ - u^\top \lambda^-| + |c^\top x|)$ (6a)
$\|(Ax - b,\ (h - Gx)^+)\|_2 \le \epsilon\,(1 + \|q\|_2)$ (6b)
$\|c - K^\top y - \lambda\|_2 \le \epsilon\,(1 + \|c\|_2)$ (6c)
where $\epsilon \in (0, \infty)$ is the termination tolerance. ... We use $\epsilon = 10^{-8}$ as a benchmark for high-quality solutions and $\epsilon = 10^{-4}$ for moderately accurate solutions. All first-order methods use all-zero vectors as the initial starting points. For Section 4.2 we impose a limit on the KKT passes of 100,000. For Section 4.3 we impose a time limit of 1 hour." (A sketch of these termination checks appears after the table.)
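
The paper's Algorithm 1 layers restarts, adaptive step sizes, and presolve on top of a basic primal-dual hybrid gradient (PDHG) iteration. Below is a minimal sketch of that base iteration in Python/NumPy, restricted for simplicity to an LP with inequality constraints and variable bounds, min c'x s.t. Gx >= h, l <= x <= u. The function name `pdhg_lp`, the fixed step sizes `tau`/`sigma`, and the dense-matrix representation are illustrative assumptions, not PDLP's actual interface.

```python
import numpy as np

def pdhg_lp(c, G, h, l, u, tau, sigma, iters=10_000):
    """Vanilla PDHG for  min c'x  s.t.  Gx >= h,  l <= x <= u.

    A bare-bones sketch of the base iteration PDLP builds on;
    Algorithm 1 in the paper adds restarts, adaptive step sizes,
    presolve, and diagonal preconditioning on top of this.
    Convergence requires tau * sigma * ||G||_2^2 <= 1.
    """
    m, n = G.shape
    x = np.zeros(n)  # primal iterate
    y = np.zeros(m)  # dual iterate for Gx >= h (must stay >= 0)
    for _ in range(iters):
        # Primal step: gradient step on the Lagrangian, projected onto [l, u].
        x_new = np.clip(x - tau * (c - G.T @ y), l, u)
        # Dual step at the extrapolated point 2*x_new - x, projected onto y >= 0.
        y = np.maximum(y + sigma * (h - G @ (2 * x_new - x)), 0.0)
        x = x_new
    return x, y
```

In practice the extrapolation in the dual step is what distinguishes PDHG from plain projected gradient ascent/descent on the saddle point, and it is what makes the fixed-step iteration provably convergent.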
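For concreteness, here is a small NumPy sketch of the relative termination checks (6a)-(6c) quoted in the Experiment Setup row, using the paper's notation K = [G; A], q = [h; b], and reduced costs lambda split as lambda = lambda_plus - lambda_minus. The function name `kkt_residuals`, the explicit split of the dual vector into `y_ineq`/`y_eq`, and the assumption of finite bounds l, u are illustrative choices; PDLP's implementation additionally handles infinite bounds and problem rescaling that this sketch omits.

```python
import numpy as np

def kkt_residuals(c, G, h, A, b, l, u, x, y_ineq, y_eq, lam, eps):
    """Evaluate the relative termination criteria (6a)-(6c) for a
    candidate primal-dual pair; assumes all bounds l, u are finite."""
    K = np.vstack([G, A])                     # constraint matrix [G; A]
    q = np.concatenate([h, b])                # right-hand side [h; b]
    y = np.concatenate([y_ineq, y_eq])        # stacked dual vector
    lam_pos = np.maximum(lam, 0.0)            # lambda^+
    lam_neg = np.maximum(-lam, 0.0)           # lambda^-
    # (6a) relative duality gap.
    dual_obj = q @ y + l @ lam_pos - u @ lam_neg
    gap_ok = abs(dual_obj - c @ x) <= eps * (1 + abs(dual_obj) + abs(c @ x))
    # (6b) relative primal infeasibility: equality and violated inequalities.
    primal_res = np.concatenate([A @ x - b, np.maximum(h - G @ x, 0.0)])
    primal_ok = np.linalg.norm(primal_res) <= eps * (1 + np.linalg.norm(q))
    # (6c) relative dual infeasibility.
    dual_res = c - K.T @ y - lam
    dual_ok = np.linalg.norm(dual_res) <= eps * (1 + np.linalg.norm(c))
    return bool(gap_ok and primal_ok and dual_ok)
```

Note that all three tests are relative: each residual is compared against a tolerance scaled by the magnitude of the corresponding problem data, which is what lets the same eps = 1e-8 or 1e-4 serve as a meaningful accuracy target across instances of very different scales.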