A General Efficient Hyperparameter-Free Algorithm for Convolutional Sparse Learning
Authors: Zheng Xu, Junzhou Huang
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments confirm the superior performance of our general algorithm in various convolutional sparse models, even better than some application-specific algorithms. |
| Researcher Affiliation | Academia | Zheng Xu, Junzhou Huang, Department of Computer Science and Engineering, The University of Texas at Arlington |
| Pseudocode | Yes | Algorithm 1 (Primal-Dual Algorithm for Convolutional Sparsity): Let ζ = ∑_{i=1}^m ‖k_i‖_1², and let L be a Lipschitz smooth constant of f. Choose β^(0) ∈ X and μ^(0) = (μ^(0)_1, μ^(0)_2, …, μ^(0)_m) ∈ U = U_1 × U_2 × ⋯ × U_m. Iterate for t = 0, 1, 2, …: update the primal variable β^(t+1) = prox_{g/(L+ζ)}( β^(t) − (1/(L+ζ)) [∇f(β^(t)) + C^H_{[m]} μ^(t)] ); update the dual variable μ^(t+1) = Π_B( μ^(t) + (1/ζ) C_{[m]} (2β^(t+1) − β^(t)) ). [A minimal Python sketch of this iteration, specialized to the fused-lasso experiment, is given below the table.] |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | No | In this experiment, we hereby set X ∈ ℝ^{n×p} as a normal Gaussian matrix, and the smallest Lipschitz smooth constant of f(β) has been estimated as O((√n + √p)²) according to (Rudelson and Vershynin 2010). β ∈ ℝ^p is randomly generated as ground truth. ... We compare our method with SLEP (Liu, Ji, and Ye 2009), Chambolle & Pock (hereafter C & P) (Chambolle and Pock 2014), CVX (Grant and Boyd 2014) and fGFL (Xin et al. 2014). ... Joint Total Variation and Nuclear Norm Regularization ... It is written as the following optimization problem: min_β (1/2)‖Xβ − y‖_F² + λ1‖β‖_* + λ2‖∇β‖_1 (14). Here, β has two spatial dimensions (i.e., a matrix), with X serving as a linear bounded subsampling operator. [A sketch of the nuclear-norm proximal step for Eq. (14) is given below the table.] |
| Dataset Splits | No | The paper mentions numerical experiments but does not explicitly state the training, validation, or test dataset splits. |
| Hardware Specification | Yes | All the experiments in this section are conducted on a desktop computer with Intel Core i7-4770 CPU and 16 gigabyte RAM. |
| Software Dependencies | No | All methods are evaluated in MATLAB 2013b, Windows 7 Enterprise. |
| Experiment Setup | Yes | In this experiment, we hereby set X ∈ ℝ^{n×p} as a normal Gaussian matrix, and the smallest Lipschitz smooth constant of f(β) has been estimated as O((√n + √p)²) according to (Rudelson and Vershynin 2010). β ∈ ℝ^p is randomly generated as ground truth. We use the exact same parameter setting as in (Liu, Yuan, and Ye 2010), i.e., λ1 = 0.001, λ2 = 0.01. ... The parameter setting is λ1 = 100, λ2 = 1, following the same setting in (Huang et al. 2011). [A sketch of this synthetic setup is given below the table.] |
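
The Pseudocode row above quotes the paper's primal-dual iteration. As an illustration written for this report (not the authors' code), here is a minimal NumPy sketch of that iteration specialized to the 1-D fused-lasso model suggested by the experiment rows, min_β (1/2)‖Xβ − y‖² + λ1‖β‖_1 + λ2‖Dβ‖_1, taking g = λ1‖·‖_1 as the prox-friendly term and the first-difference operator D (kernel [1, −1]) as the single convolutional penalty; all function and variable names are assumptions made for this sketch.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_fused_lasso(X, y, lam1, lam2, n_iter=500):
    """Hypothetical instance of the quoted primal-dual iteration for
    min_beta 0.5*||X beta - y||^2 + lam1*||beta||_1 + lam2*||D beta||_1,
    with D the first-difference (convolution with [1, -1]) operator."""
    n, p = X.shape
    C = lambda b: np.diff(b)                              # C_{[m]} beta (single difference kernel)
    CH = lambda m: -np.diff(m, prepend=0.0, append=0.0)   # adjoint C^H mu
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of grad f: sigma_max(X)^2
    zeta = 2.0 ** 2                      # ||[1, -1]||_1^2, an upper bound on ||C||^2
    tau, sigma = 1.0 / (L + zeta), 1.0 / zeta
    beta = np.zeros(p)
    mu = np.zeros(p - 1)                 # dual variable for the convolutional penalty
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        # Primal step: prox of g = lam1*||.||_1 with step 1/(L + zeta).
        beta_new = soft_threshold(beta - tau * (grad + CH(mu)), tau * lam1)
        # Dual step: projection onto the l_inf ball of radius lam2.
        mu = np.clip(mu + sigma * C(2.0 * beta_new - beta), -lam2, lam2)
        beta = beta_new
    return beta
```

Note that the step sizes τ = 1/(L + ζ) and σ = 1/ζ are fixed entirely by L and ζ, with no tuned step-size parameters, which is consistent with the hyperparameter-free framing in the paper's title.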
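
The synthetic fused-lasso setup quoted in the Open Datasets and Experiment Setup rows (normal Gaussian design, randomly generated ground truth, Lipschitz constant estimated as O((√n + √p)²)) could be generated along the following lines; the dimensions, support pattern, and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 1000                         # illustrative sizes (not stated in the quoted text)
X = rng.standard_normal((n, p))          # normal Gaussian design matrix

# Randomly generated piecewise-constant ground truth (illustrative choice).
beta_true = np.zeros(p)
beta_true[100:150] = 1.0
beta_true[400:430] = -2.0
y = X @ beta_true + 0.01 * rng.standard_normal(n)

# Lipschitz smooth constant of f(beta) = 0.5*||X beta - y||^2 is sigma_max(X)^2;
# for a Gaussian matrix this is O((sqrt(n) + sqrt(p))^2) (Rudelson and Vershynin 2010),
# giving a cheap upper bound that avoids an exact spectral-norm computation.
L_bound = (np.sqrt(n) + np.sqrt(p)) ** 2
```

The (√n + √p)² bound could replace the exact spectral norm in the sketch above, and with the quoted parameters λ1 = 0.001 and λ2 = 0.01 the generated (X, y) pair could be passed to primal_dual_fused_lasso.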
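
For the joint total variation and nuclear norm model in Eq. (14), the same primal-dual template would plug in the nuclear norm through its proximal operator (singular-value soft-thresholding), while the TV term is a convolutional ℓ1 penalty handled by the dual projection. The helper below is a standard construction written for this report, not code from the paper.

```python
import numpy as np

def prox_nuclear(B, t):
    """Prox of t * ||.||_* : soft-threshold the singular values of the matrix B."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Tiny demo on a random matrix (illustrative only).
print(prox_nuclear(np.random.default_rng(0).standard_normal((4, 5)), 0.5))
```

In the notation of Algorithm 1, prox_nuclear would take the place of the soft-thresholding step in the primal update, with the dual projection taken over the horizontal and vertical difference kernels of the 2-D TV penalty.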