Differentiable Linearized ADMM
Authors: Xingyu Xie, Jianlong Wu, Guangcan Liu, Zhisheng Zhong, Zhouchen Lin
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, empirical results that verify the proposed theories are given in Section 5. We use both LADMM and the proposed D-LADMM to solve the above problem, and compare their results on both synthetic datasets and natural images. |
| Researcher Affiliation | Academia | 1Key Lab. of Machine Perception, School of EECS, Peking University. 2B-DAT and CICAEET, School of Automation, Nanjing University of Information Science and Technology. Correspondence to: Guangcan Liu <gcliu@nuist.edu.cn>, Zhouchen Lin <zlin@pku.edu.cn>. |
| Pseudocode | No | The paper provides mathematical equations for the iterative scheme (7) and a block diagram in Figure 1, but no formal pseudocode or algorithm block (a hedged layer sketch is given below this table). |
| Open Source Code | Yes | Code: https://github.com/zzs1994/D-LADMM |
| Open Datasets | Yes | We first experiment with synthetic data, using similar experimental settings as (Chen et al., 2018). We also evaluate the considered methods on the task of natural image denoising... The experimental data is a classic dataset consisting of 12 natural images, the Waterloo BragZone greyscale set. |
| Dataset Splits | No | The numbers of training and testing samples are set to 10,000 and 1,000, respectively. The paper specifies training and testing samples but does not explicitly mention a validation set or its split. |
| Hardware Specification | No | The paper mentions training times (e.g., 'D-LADMM needs 5 and 9 minutes') but provides no specific details on the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | Our D-LADMM is implemented on the PyTorch platform. The paper mentions PyTorch but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | For the proposed D-LADMM, the number of layers is set to K = 15. SGD is adopted to update the parameters with learning rate lr = 0.01. For the activation function, the softshrink (soft-thresholding) operator of Beck & Teboulle (2009) is used. The parameter in problem (21) is set as λ = 0.5 (see the training-setup sketch after this table). |
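
Since the paper presents its iterative scheme only as equations (Eq. (7)) and a block diagram, the following is a minimal PyTorch sketch of what one unrolled linearized-ADMM-style layer could look like for a model of the form min ||Z||_1 + λ||E||_1 s.t. X = AZ + E. The learnable operator `W`, the thresholds `theta_z`/`theta_e`, and the penalty `beta` are illustrative assumptions, not the authors' exact layer-wise parameterization.

```python
import torch
import torch.nn as nn


def soft_shrink(v, theta):
    """Soft-thresholding (softshrink): sign(v) * max(|v| - theta, 0)."""
    return torch.sign(v) * torch.clamp(v.abs() - theta, min=0.0)


class LADMMLayer(nn.Module):
    """One unrolled linearized-ADMM-style iteration for
    min_{Z,E} ||Z||_1 + lam * ||E||_1  s.t.  X = A Z + E.
    The learnable operator W, thresholds, and penalty beta are
    illustrative assumptions, not the paper's exact parameters."""

    def __init__(self, m, d):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(d, m))  # learnable linearization operator
        self.theta_z = nn.Parameter(torch.tensor(0.1))   # shrinkage threshold for Z
        self.theta_e = nn.Parameter(torch.tensor(0.1))   # shrinkage threshold for E
        self.beta = nn.Parameter(torch.tensor(1.0))      # penalty / dual step size

    def forward(self, z, e, lam, x, A):
        r = A @ z + e - x                                # constraint residual
        z = soft_shrink(z - self.W @ (r + lam / self.beta), self.theta_z)
        e = soft_shrink(x - A @ z - lam / self.beta, self.theta_e)
        lam = lam + self.beta * (A @ z + e - x)          # multiplier (dual) update
        return z, e, lam
```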
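Building on the layer sketch above, the snippet below sketches the reported experiment setup: K = 15 unrolled layers, SGD with learning rate 0.01, softshrink activations, and λ = 0.5 in problem (21). The problem dimensions, initialization, and the training loss are hypothetical placeholders, not the paper's exact objective.

```python
import torch
import torch.nn as nn


class DLADMMNet(nn.Module):
    """Stack of K unrolled layers (K = 15 in the reported setup).
    Relies on the LADMMLayer sketch defined above."""

    def __init__(self, m, d, K=15):
        super().__init__()
        self.layers = nn.ModuleList(LADMMLayer(m, d) for _ in range(K))

    def forward(self, x, A):
        d = A.shape[1]
        z = torch.zeros(d, x.shape[1], device=x.device)  # sparse code Z^0
        e = torch.zeros_like(x)                          # sparse noise E^0
        lam = torch.zeros_like(x)                        # Lagrange multiplier
        for layer in self.layers:
            z, e, lam = layer(z, e, lam, x, A)
        return z, e


# Reported hyper-parameters: K = 15, SGD with lr = 0.01, lambda = 0.5 in problem (21).
# Dimensions, data, and the loss below are hypothetical placeholders.
lam_reg = 0.5
m, d, batch = 250, 500, 64
A = torch.randn(m, d)
net = DLADMMNet(m, d, K=15)
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

x = torch.randn(m, batch)                                # stand-in training batch
z_hat, e_hat = net(x, A)
loss = (z_hat.abs().sum() + lam_reg * e_hat.abs().sum()
        + ((A @ z_hat + e_hat - x) ** 2).sum())          # illustrative objective only
optimizer.zero_grad()
loss.backward()
optimizer.step()
```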