PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming

Authors: Bingheng Li, Linxin Yang, Yupeng Chen, Senmiao Wang, Haitao Mao, Qian Chen, Yao Ma, Akang Wang, Tian Ding, Jiliang Tang, Ruoyu Sun

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our approach can significantly accelerate LP solving, achieving up to a 3× speedup compared to FOMs for large-scale LP problems.
Researcher Affiliation | Academia | (1) Michigan State University; (2) School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; (3) Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data; (4) School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China; (5) Rensselaer Polytechnic Institute.
Pseudocode | No | The paper describes the 'Primal-Dual Hybrid Gradient (PDHG)' algorithm with update rules and mathematical equations, and diagrams its architecture in Figure 1, but it does not present a clearly labeled 'Pseudocode' or 'Algorithm' block. (A minimal sketch of the PDHG iteration is given after this table.)
Open Source Code | Yes | For reproducibility, our code can be found at https://github.com/NetSysOpt/PDHG-Net.git.
Open Datasets | Yes | Specifically, our experiments involve the PageRank dataset for large-scale LP problems, as well as the Item Placement (IP) and the Workload Apportionment (WA) datasets (Gasse et al., 2022) for complex LP relaxations. Particularly, the PageRank dataset is acquired by generating instances in accordance with the methodology outlined in (Applegate et al., 2021).
Dataset Splits | Yes | The dataset {(M_1, z_1), ..., (M_{|I|}, z_{|I|})} is then split into training and validation sets with a 9:1 ratio.
Hardware Specification | Yes | all computational studies are carried out on a cloud server with one NVIDIA Tesla V100-32GB GPU, one Intel Xeon Platinum 8280 CPU and 3,072 GB RAM.
Software Dependencies | Yes | we employ the off-the-shelf implementation of PDLP from Google's OR-Tools 9.8.3296 (Perron & Furnon). Gurobi 10.0.2's LP solvers (Gurobi Optimization, LLC, 2023) are also employed for experiments on the PageRank dataset. Our proposed framework was implemented utilizing PyTorch version 2.1.1 (Paszke et al., 2019). (An illustrative OR-Tools PDLP call is sketched after this table.)
Experiment Setup | Yes | we utilize a 4-layer PDHG-Net to strike an optimal balance between network performance and inference costs... The training process entailed minimizing the ℓ2 squared loss, as described in Equation 4, and updating the trainable parameter Θ through backward propagation, where we utilize an Adam optimizer with a learning rate of 10⁻⁴. (A training-loop sketch reflecting this setup, including the 9:1 split, follows the table.)
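
The PDHG update rules referenced in the Pseudocode row can be written out explicitly. Below is a minimal sketch of the vanilla PDHG iteration for an LP of the form min cᵀx s.t. Ax ≥ b, x ≥ 0; the step sizes and iteration count are illustrative assumptions rather than values from the paper, and the restarts and adaptive step sizes used in PDLP are omitted.

```python
# Minimal PDHG sketch for   min c^T x   s.t.  A x >= b,  x >= 0.
# Step sizes and iteration count are illustrative assumptions, not the paper's settings.
import numpy as np

def pdhg_lp(A, b, c, num_iters=1000):
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)      # primal and dual iterates
    eta = 0.9 / np.linalg.norm(A, 2)     # keeps tau * sigma * ||A||_2^2 < 1 for convergence
    tau = sigma = eta
    for _ in range(num_iters):
        # primal step: gradient step on c - A^T y, then projection onto x >= 0
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # dual step on the extrapolated point 2*x_new - x, projected onto y >= 0
        y = np.maximum(y + sigma * (b - A @ (2 * x_new - x)), 0.0)
        x = x_new
    return x, y
```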
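
Since the baseline FOM solver is PDLP from Google's OR-Tools, the snippet below shows one way to invoke it on a toy LP through the pywraplp wrapper. It is not taken from the paper's code, and it assumes the installed OR-Tools build exposes the "PDLP" backend id (CreateSolver returns None otherwise).

```python
# Hypothetical minimal call to OR-Tools' PDLP backend on a toy LP; not from the paper's code.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("PDLP")  # assumes PDLP is available in this build
if solver is not None:
    x = solver.NumVar(0.0, solver.infinity(), "x")
    y = solver.NumVar(0.0, solver.infinity(), "y")
    solver.Add(x + 2 * y >= 4)                 # a toy constraint
    solver.Minimize(3 * x + y)
    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        print("objective =", solver.Objective().Value())
```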
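
The quoted experiment setup (9:1 train/validation split, ℓ2 squared loss against the collected solutions z_i, Adam with learning rate 10⁻⁴) can be assembled into a PyTorch training loop as sketched below. PDHGNet here is only a placeholder MLP standing in for the paper's 4-layer unrolled architecture, and the hidden width, batch size, and epoch count are assumptions.

```python
# Training-loop sketch for the quoted setup; PDHGNet is a placeholder, not the paper's model.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, random_split

class PDHGNet(nn.Module):                      # stand-in for the 4-layer unrolled network
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, m):
        return self.layers(m)

def train(features, targets, epochs=100):
    dataset = TensorDataset(features, targets)
    n_train = int(0.9 * len(dataset))          # 9:1 train/validation split
    train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])
    model = PDHGNet(features.shape[1], targets.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam with learning rate 1e-4
    loss_fn = nn.MSELoss()                     # l2 squared loss (Equation 4 in the paper)
    for _ in range(epochs):
        for m, z in DataLoader(train_set, batch_size=32, shuffle=True):
            opt.zero_grad()
            loss_fn(model(m), z).backward()
            opt.step()
    return model, val_set
```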