Optimizing ADMM and Over-Relaxed ADMM Parameters for Linear Quadratic Problems

Authors: Jintao Song, Wenqi Lu, Yunwen Lei, Yuchao Tang, Zhenkuan Pan, Jinming Duan

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we will first test the generalization ability of our proposed parameter selection method through random instantiations. Following that, we will apply the proposed parameter selection methods to diffeomorphic image registration, image deblurring, and MRI reconstruction. We will compare our optimal ADMM algorithm and over-relaxed variant (oADMM) with gradient descent (GD), gradient descent with Nesterov's acceleration (GD-N) (Nesterov 1983; Bartlett and Duan 2021), gradient descent with Nesterov's acceleration and restart (GD-NR) (O'Donoghue and Candes 2015; Bartlett and Duan 2021), as well as conjugate gradient (CG).
Researcher Affiliation | Academia | (1) School of Computer Science, University of Birmingham, UK; (2) College of Computer Science and Technology, Qingdao University, China; (3) Department of Computing and Mathematics, Manchester Metropolitan University, UK; (4) Centre for Computational Science and Mathematical Modelling, Coventry University, UK; (5) Department of Mathematics, University of Hong Kong, HK; (6) School of Mathematics and Information Science, Guangzhou University, China
Pseudocode | Yes | Algorithm 1: ADMM for LQPs
  Input: matrices A and L; parameters µ and θ
  Initialize: u^0 and b^0
  Repeat:
    w^{k+1} = argmin_w (1/2)||Lw||^2 + (θ/2)||w - u^k - b^k||^2
    u^{k+1} = argmin_u (µ/2)||Au - f||^2 + (θ/2)||w^{k+1} - u - b^k||^2
    b^{k+1} = b^k + u^{k+1} - w^{k+1}
  until some stopping criterion is met
  (A runnable sketch of this iteration is given after the table.)
Open Source Code | No | The paper does not provide any specific links or explicit statements about releasing source code for the methodology described.
Open Datasets | No | The paper mentions "random instantiations", a "phantom test image", and describes how data was generated for MRI reconstruction (e.g., "50% of the data there was taken using a Cartesian sampling mask"), but does not provide concrete access information (link, DOI, citation) for any publicly available or open dataset.
Dataset Splits | No | The paper does not explicitly provide the training/test/validation dataset splits needed to reproduce the experiments.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments.
Software Dependencies | No | The paper does not provide specific version numbers for software components or libraries.
Experiment Setup | Yes | In all of our experiments, we chose the step size in gradient-based methods using the Lipschitz constant of the corresponding problem. It is worth noting that optimal values for the penalty parameters can be determined analytically for the image deblurring and MRI reconstruction problems. However, for image registration, numerical gradient descent is required to compute these parameters. [...] Note that we set the regularization parameter µ to 10^3 for this experiment. [...] The original image (displayed in the top left panel) was first transformed into k-space using the Fourier transformation. Then 50% of the data there was taken using a Cartesian sampling mask, displayed in the original image. This undersampled data was then corrupted by additive zero-mean white Gaussian noise with standard deviation 1 to form f in (18). (A hedged simulation of this data-generation step is sketched after the table.)
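
For the linear quadratic problems targeted by Algorithm 1, each ADMM step reduces to two quadratic subproblems, i.e. two linear solves, followed by a dual update. The NumPy sketch below illustrates that iteration under simplifying assumptions: dense matrices, direct solves, and an illustrative stopping test. The function name admm_lqp and its defaults are placeholders, not the authors' implementation, which would typically exploit problem structure (e.g. analytically invertible or FFT-diagonalizable operators).

import numpy as np

def admm_lqp(A, L, f, mu, theta, num_iters=100, tol=1e-8):
    """Minimal sketch of Algorithm 1 (ADMM for linear quadratic problems).

    Splits w = u and alternates the two quadratic subproblems with a
    dual update on b. Dense solves are used purely for clarity.
    """
    n = A.shape[1]
    u = np.zeros(n)
    b = np.zeros(n)
    # Constant normal matrices of the two quadratic subproblems.
    W_lhs = L.T @ L + theta * np.eye(n)          # w-update system
    U_lhs = mu * (A.T @ A) + theta * np.eye(n)   # u-update system
    Atf = mu * (A.T @ f)
    for _ in range(num_iters):
        u_old = u
        # w-update: argmin_w (1/2)||Lw||^2 + (theta/2)||w - u^k - b^k||^2
        w = np.linalg.solve(W_lhs, theta * (u + b))
        # u-update: argmin_u (mu/2)||Au - f||^2 + (theta/2)||w^{k+1} - u - b^k||^2
        u = np.linalg.solve(U_lhs, Atf + theta * (w - b))
        # dual update: b^{k+1} = b^k + u^{k+1} - w^{k+1}
        b = b + u - w
        if np.linalg.norm(u - u_old) <= tol * max(np.linalg.norm(u_old), 1.0):
            break
    return u

The over-relaxed variant (oADMM) would replace u^{k+1} in the dual update (and in the w-coupling) by a relaxed combination controlled by a relaxation parameter; that extension is omitted here to keep the sketch aligned with Algorithm 1 as quoted.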
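
The MRI reconstruction setup quoted above (Fourier transform of the test image, 50% Cartesian undersampling, additive zero-mean white Gaussian noise with standard deviation 1) can be mimicked in a few lines of NumPy. The sketch below is an assumption-laden illustration: the paper does not specify the exact Cartesian mask, so keeping every other phase-encode line is a placeholder, and simulate_undersampled_kspace is a hypothetical helper name rather than anything from the paper.

import numpy as np

def simulate_undersampled_kspace(image, acceleration=2, noise_std=1.0, seed=0):
    """Illustrative simulation of the MRI measurement data f described above.

    The ground-truth image is mapped to k-space with the 2D Fourier
    transform, a Cartesian mask keeps 1/acceleration of the phase-encode
    lines (50% when acceleration=2), and zero-mean white Gaussian noise
    with the given standard deviation is added to the retained samples.
    """
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)
    # Cartesian mask: keep every `acceleration`-th row of k-space
    # (the specific line pattern used in the paper is not stated).
    mask = np.zeros(image.shape, dtype=bool)
    mask[::acceleration, :] = True
    noise = rng.normal(0.0, noise_std, size=image.shape) \
        + 1j * rng.normal(0.0, noise_std, size=image.shape)
    f = mask * (kspace + noise)
    return f, mask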