IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs
Authors: Xi Gao, Jinxin Xiong, Akang Wang, Qihong Duan, Jiang Xue, Qingjiang Shi
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various types of NLPs, including Quadratic Programs and Quadratically Constrained Quadratic Programs, show that our approach can significantly accelerate NLP solving, reducing iterations by up to 60% and solution time by up to 70% compared to the default solver. (Abstract) and 4 Experiments (Section title). |
| Researcher Affiliation | Collaboration | Xi Gao (1), Jinxin Xiong (2,3), Akang Wang (2,3,*), Qihong Duan (1), Jiang Xue (1,*), and Qingjiang Shi (2,4). (1) School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China; (2) Shenzhen Research Institute of Big Data, China; (3) School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; (4) School of Software Engineering, Tongji University, Shanghai, China |
| Pseudocode | Yes | A pseudocode of the IPM is presented as Algorithm 1. Algorithm 1 The classic IPM. Inputs: An initial solution (x^0, λ^0, z^0), σ ∈ (0, 1), k ← 0. Outputs: The optimal solution (x*, λ*, z*). 1: while not converged do; 2: Update µ^k; 3: Solve the system J^k (Δx^k, Δλ^k, Δz^k) = −F^k; 4: Choose α^k via a line-search filter method; 5: (x^{k+1}, λ^{k+1}, z^{k+1}) ← (x^k, λ^k, z^k) + α^k (Δx^k, Δλ^k, Δz^k); 6: k ← k + 1; 7: end while. An illustrative Python sketch of this loop is given after the table. |
| Open Source Code | Yes | Our code is available at https://github.com/NetSysOpt/IPMLSTM. |
| Open Datasets | Yes | The dataset used in this paper includes randomly generated benchmarks obtained from Chen and Burer (2012), Donti et al. (2021) and Liang et al. (2023), as well as real-world instances from Globallib (see http://www.minlplib.org). |
| Dataset Splits | Yes | For each case, we generate 10,000 samples and divide them into a 10 : 1 : 1 ratio for training, validation, and testing, respectively. The split arithmetic is sketched after the table. |
| Hardware Specification | Yes | All our experiments were conducted on an NVIDIA RTX A6000 GPU, an Intel Xeon 2.10GHz CPU, using Python 3.10.0 and PyTorch 1.13.1. |
| Software Dependencies | Yes | All our experiments were conducted on an NVIDIA RTX A6000 GPU, an Intel Xeon 2.10GHz CPU, using Python 3.10.0 and PyTorch 1.13.1. |
| Experiment Setup | Yes | The learning rate is 0.0001, and the batch size is 128 for each task. Additional IPM-LSTM parameters for each task are provided in Appendix C. (Section 4.1) and All LSTM networks have a single layer and are trained using the Adam optimizer (Kingma, 2014). (Section 4.1) An illustrative sketch of this training configuration follows the table. |
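
As a point of reference for Algorithm 1 quoted above, the following is a minimal, illustrative primal-dual IPM for a bound-constrained QP (minimize ½xᵀQx + cᵀx subject to x ≥ 0). It mirrors the quoted loop (update µ, solve the Newton system JΔ = −F, choose a step size, update the iterate), but it is not the authors' implementation: the bound-constrained problem class, the fraction-to-boundary step rule used in place of the line-search filter, and all names are assumptions made for this sketch.

```python
import numpy as np

# Minimal, illustrative primal-dual IPM for  min 1/2 x'Qx + c'x  s.t.  x >= 0.
# Not the paper's code: it only mirrors the structure of Algorithm 1
# (update mu, solve the Newton system J * delta = -F, choose a step size,
# update the iterate) and uses a fraction-to-boundary rule instead of the
# line-search filter mentioned in the paper.
def ipm_bound_qp(Q, c, sigma=0.2, tol=1e-8, max_iter=100):
    n = len(c)
    x = np.ones(n)   # primal iterate, kept strictly positive
    z = np.ones(n)   # dual multipliers for x >= 0, kept strictly positive
    for _ in range(max_iter):
        mu = sigma * (x @ z) / n              # update the barrier parameter
        r_dual = Q @ x + c - z                # stationarity residual
        r_comp = x * z - mu                   # perturbed complementarity
        if max(np.abs(r_dual).max(), (x * z).max()) < tol:
            break
        # Assemble and solve the Newton (KKT) system J [dx; dz] = -F.
        J = np.block([[Q, -np.eye(n)],
                      [np.diag(z), np.diag(x)]])
        step = np.linalg.solve(J, -np.concatenate([r_dual, r_comp]))
        dx, dz = step[:n], step[n:]
        # Fraction-to-boundary step size keeping (x, z) strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (z, dz)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
        x, z = x + alpha * dx, z + alpha * dz
    return x

# Example: min 1/2 ||x||^2 - 1'x with x >= 0 has solution x = (1, 1).
print(ipm_bound_qp(Q=np.eye(2), c=-np.ones(2)))
```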
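The 10 : 1 : 1 split of 10,000 samples works out to roughly 8,333 / 833 / 834 samples per case. A minimal sketch of such a split with PyTorch utilities follows; the dataset contents and the feature dimension are placeholders, not the paper's data.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Illustrative only: 10,000 samples divided in the reported 10 : 1 : 1 ratio.
# The feature dimension (100) and random contents are placeholders.
n_samples = 10_000
n_train = n_samples * 10 // 12          # 8,333 training samples
n_val = n_samples // 12                 # 833 validation samples
n_test = n_samples - n_train - n_val    # 834 test samples

dataset = TensorDataset(torch.randn(n_samples, 100))
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])
print(len(train_set), len(val_set), len(test_set))
```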
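Below is a minimal sketch of the reported training configuration (a single-layer LSTM trained with Adam at learning rate 1e-4 and batch size 128). The input and hidden sizes, sequence length, prediction head, loss, and dummy data are assumptions for this sketch, not taken from the paper or the released code.

```python
import torch
from torch import nn

# Illustrative only: single-layer LSTM + Adam with the reported
# hyperparameters (lr = 1e-4, batch size = 128). Sizes, the linear head,
# and the MSE loss are placeholders, not the paper's architecture.
input_size, hidden_size, seq_len, batch_size = 64, 128, 10, 128
lstm = nn.LSTM(input_size, hidden_size, num_layers=1, batch_first=True)
head = nn.Linear(hidden_size, input_size)
optimizer = torch.optim.Adam(
    list(lstm.parameters()) + list(head.parameters()), lr=1e-4
)

# One dummy training step on random data, just to show the wiring.
x = torch.randn(batch_size, seq_len, input_size)   # (batch, seq, features)
target = torch.randn(batch_size, input_size)
out, _ = lstm(x)                                   # out: (batch, seq, hidden)
loss = nn.functional.mse_loss(head(out[:, -1]), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```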