Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Solving Quadratic Programs via Deep Unrolled Douglas-Rachford Splitting
Authors: Jinxin Xiong, Xi Gao, Linxin Yang, Jiang Xue, Xiaodong Luo, Akang Wang
TMLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the proposed framework, we first analyze the convergence behavior of Algorithm 2 on synthetic instances of varying sizes. Next, we compare its performance against state-of-the-art solvers and learning-based baselines on both synthetic and real-world datasets. Finally, we provide a detailed discussion of the results. The code is publicly available at https://github.com/NetSysOpt/DR-GD.git. |
| Researcher Affiliation | Academia | Jinxin Xiong1,2, Xi Gao3, Linxin Yang1,2, Jiang Xue3, Xiaodong Luo1,2, Akang Wang1,2 1School of Data Science, The Chinese University of Hong Kong, Shenzhen, China 2Shenzhen International Center for Industrial and Applied Mathematics, Shenzhen Research Institute of Big Data, China 3School of Mathematics and Statistics, Xi'an Jiaotong University, China Corresponding Author: Akang Wang <EMAIL> |
| Pseudocode | Yes | Algorithm 1: Douglas-Rachford Splitting; Algorithm 2: DR-GD; Algorithm 3: DR-GD Net; Algorithm 4: DR-GD Net with multi-gradient steps |
| Open Source Code | Yes | The code is publicly available at https://github.com/NetSysOpt/DR-GD.git. |
| Open Datasets | Yes | The datasets used in this work include synthetic benchmarks and perturbed real-world instances. Specifically, the datasets are: (i) QP (RHS) (Donti et al., 2021): Convex QPs parameterized only by the right-hand side of equality constraints, generated as in Donti et al. (2021), with n = 200, 500, 1000 in the experiments. (ii) QP (Gao et al., 2024): The dataset was generated as in Gao et al. (2024), where all the parameters are perturbed by a random factor sampled from U[0.9, 1.1]. (iii) QPLIB (Furini et al., 2019): Selected instances from Furini et al. (2019) with all parameters perturbed by a random factor sampled from U[0.9, 1.1]. (iv) Portfolio (Stellato et al., 2020): Consider the portfolio optimization problem, as introduced in Stellato et al. (2020), which is formulated as follows: |
| Dataset Splits | Yes | For each dataset, 400 samples are generated for training, 40 for validation, and 100 for testing. All reported results are based on the test set. |
| Hardware Specification | Yes | All experiments were conducted on an NVIDIA GeForce RTX 3090 GPU and a 12th Gen Intel(R) Core(TM) i9-12900K CPU, using Python 3.9.17, PyTorch 2.0.1, SCS 3.2.6 (O'Donoghue, 2021) and OSQP 0.6.7 (Stellato et al., 2020). |
| Software Dependencies | Yes | All experiments were conducted on an NVIDIA GeForce RTX 3090 GPU and a 12th Gen Intel(R) Core(TM) i9-12900K CPU, using Python 3.9.17, PyTorch 2.0.1, SCS 3.2.6 (O'Donoghue, 2021) and OSQP 0.6.7 (Stellato et al., 2020). |
| Experiment Setup | Yes | In all experiments, DR-GD Nets with 4 layers and embedding sizes of 128 are trained with a batch size of 2, a learning rate of 10⁻⁵, and the Adam optimizer (Kingma, 2014). The parameters η_l are set to 0.05 for QPLIB datasets and 0.1 for the other datasets. Early stopping is employed to terminate training if the validation loss shows no improvement for 10 consecutive epochs. The model achieving the best validation performance is saved for testing. |
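The paper unrolls Douglas-Rachford (DR) splitting, listed as Algorithm 1 above. For readers unfamiliar with the base iteration, here is a minimal sketch of classic DR splitting on a toy problem, minimizing 0.5·‖x − c‖² subject to x ≥ 0 by splitting into a quadratic term f and the indicator of the nonnegative orthant g. The toy problem, step size, and function names are illustrative only; they are not the authors' QP formulation or code.

```python
import numpy as np

def prox_f(v, c, t):
    # Prox of t*f for f(x) = 0.5*||x - c||^2:
    # argmin_x 0.5*||x - v||^2 + t*0.5*||x - c||^2
    return (v + t * c) / (1.0 + t)

def prox_g(v):
    # Projection onto the nonnegative orthant (prox of the indicator of x >= 0).
    return np.maximum(v, 0.0)

def douglas_rachford(c, t=1.0, iters=200):
    # Classic DR iteration: resolvent step on f, reflected step on g.
    z = np.zeros_like(c)
    for _ in range(iters):
        x = prox_f(z, c, t)
        z = z + prox_g(2.0 * x - z) - x
    return prox_f(z, c, t)

c = np.array([1.5, -2.0, 0.3])
x_star = douglas_rachford(c)
print(np.round(x_star, 4))  # converges to the projection max(c, 0)
```

The learned variant in the paper (DR-GD Net) replaces hand-tuned quantities in this iteration with trainable, unrolled layers; the fixed-point structure above is the part Algorithm 1 shares with the classical method.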
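The experiment-setup excerpt describes early stopping with a patience of 10 epochs on validation loss, keeping the best checkpoint for testing. A minimal sketch of that rule, framework-agnostic; the `EarlyStopper` class and its names are illustrative, not taken from the authors' code:

```python
class EarlyStopper:
    """Stop training after `patience` epochs without validation improvement,
    remembering the best model state seen so far."""

    def __init__(self, patience=10):
        self.patience = patience
        self.best_loss = float("inf")
        self.best_state = None
        self.bad_epochs = 0

    def step(self, val_loss, model_state):
        # Returns True when training should stop.
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.best_state = model_state  # snapshot of best model so far
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Example: losses improve, then plateau; stopping fires after 10 flat epochs.
stopper = EarlyStopper(patience=10)
losses = [1.0, 0.8, 0.7] + [0.7] * 12
stopped_at = None
for epoch, loss in enumerate(losses):
    if stopper.step(loss, model_state={"epoch": epoch}):
        stopped_at = epoch
        break
print(stopped_at, stopper.best_loss)  # stops at epoch 12 with best loss 0.7
```

In a real run, `model_state` would be a copy of the network weights (e.g. a deep-copied state dict), and the saved `best_state` is what gets evaluated on the test set.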