Geometry and Symmetry in Short-and-Sparse Deconvolution

Authors: Han-Wen Kuo, Yenson Lau, Yuqian Zhang, John Wright

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4. Experiments: We demonstrate that the tradeoffs between the motif length p0 and sparsity rate θ produce a transition region for successful SaS deconvolution under generic choices of a0 and x0. For fixed values of θ ∈ [10^−3, 10^−2] and p0 ∈ [10^3, 10^4], we draw 50 instances of synthetic data by choosing a0 ∼ Unif(S^{p0−1}) and x0 ∈ R^n with x0 i.i.d. BG(θ), where n = 5 × 10^5. Note that choosing a0 this way implies µ(a0) ∼ 1/√p0. For each instance, we recover a0 and x0 from y = a0 ∗ x0 by minimizing problem (2.5). For ease of computation, we modify Algorithm 1 by replacing curvilinear search with an accelerated Riemannian gradient descent method (see Appendix M). In Figure 7, we say the local minimizer amin is sufficiently close to a solution of the SaS deconvolution problem if success(amin; a0) := {max_ℓ |⟨s_ℓ[a0], amin⟩| > 0.95}.
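The quoted success criterion can be checked directly from a recovered motif. Below is a minimal sketch, assuming NumPy and that amin is zero-padded to length p ≥ p0 so every shift s_ℓ[a0] is covered by the full cross-correlation; the function name and tolerance argument are illustrative, not taken from the paper's code.

```python
import numpy as np

def success(a_min: np.ndarray, a0: np.ndarray, tol: float = 0.95) -> bool:
    """Return True if some shift of a0 correlates with a_min above tol."""
    a_min = a_min / np.linalg.norm(a_min)
    a0 = a0 / np.linalg.norm(a0)
    # |<s_l[a0], a_min>| over all integer shifts l is the full cross-correlation.
    corr = np.correlate(a_min, a0, mode="full")
    return bool(np.max(np.abs(corr)) > tol)
```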
Researcher Affiliation | Academia | 1 Data Science Institute, Columbia University, New York City, NY, USA; 2 Department of Electrical Engineering, Columbia University, New York City, NY, USA; 3 Department of Computer Science, Cornell University, Ithaca, NY, USA; 4 Department of Applied Math and Applied Physics, Columbia University, New York City, NY, USA.
Pseudocode | Yes | Algorithm 1 (Short and Sparse Deconvolution):
Input: observation y, motif length p0, sparsity θ, shift-coherence µ, and curvature threshold ηv.
Minimization: Initialize a(0) ← P_{S^{p−1}} ∇ϕρ(P_{S^{p−1}}[0_{p0−1}; y_0; …; y_{p0−1}; 0_{p0−1}]), λ = 0.1/√(p0 θ), and δ > 0 in ϕρ.
for k = 1 to K1 do
  a(k+1) ← P_{S^{p−1}}[a(k) − t·g(k) − t^2·v(k)]. Here, g(k) is the Riemannian gradient; v(k) is the eigenvector of the smallest Riemannian Hessian eigenvalue if that eigenvalue is less than ηv and ⟨v(k), g(k)⟩ ≤ 0, otherwise v(k) = 0; and t ∈ (0, 0.1/nθ] satisfies ϕρ(a(k+1)) < ϕρ(a(k)) − (1/2) t ‖g(k)‖_2^2 − (1/4) t^4 ηv ‖v(k)‖_2^2.
end for
Obtain a near local minimizer a ← a(K1).
Refinement: Initialize a(0) ← a, λ(0) ← 10(pθ + log n)(µ + 1/p), and I(0) ← S_{λ(0)}[supp(y − a)].
for k = 1 to K2 do
  x(k+1) ← argmin_x (1/2)‖a(k) ∗ x − y‖_2^2 + λ(k) Σ_{i∉I(k)} |x_i|
  a(k+1) ← P_{S^{p−1}} argmin_a (1/2)‖a ∗ x(k+1) − y‖_2^2
  λ(k+1) ← λ(k)/2, I(k+1) ← supp(x(k+1))
end for
Output: (â, x̂) ← (a(K2), x(K2))
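As a rough illustration of the minimization phase quoted above, here is a minimal NumPy sketch of one Riemannian gradient step on the sphere S^{p−1} with a backtracking line search. The negative-curvature direction v(k) is omitted (set to zero), and phi / grad_phi are placeholder callables standing in for ϕρ and its Euclidean gradient; this is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def sphere_project(a: np.ndarray) -> np.ndarray:
    """Project a vector onto the unit sphere S^{p-1}."""
    return a / np.linalg.norm(a)

def riemannian_gradient_step(a, phi, grad_phi, t0=1.0, beta=0.5, max_backtracks=30):
    """One projected Riemannian gradient step with Armijo-style backtracking."""
    g_euclid = grad_phi(a)
    # Riemannian gradient: project the Euclidean gradient onto the tangent space at a.
    g = g_euclid - (g_euclid @ a) * a
    t = t0
    for _ in range(max_backtracks):
        a_new = sphere_project(a - t * g)
        # Sufficient-decrease test mirroring the (1/2) t ||g||^2 term in Algorithm 1.
        if phi(a_new) < phi(a) - 0.5 * t * np.dot(g, g):
            return a_new
        t *= beta
    return a  # fall back to the current iterate if no acceptable step is found
```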
Open Source Code | No | The paper does not contain an unambiguous statement about releasing the source code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | No | For fixed values of θ ∈ [10^−3, 10^−2] and p0 ∈ [10^3, 10^4], we draw 50 instances of synthetic data by choosing a0 ∼ Unif(S^{p0−1}) and x0 ∈ R^n with x0 i.i.d. BG(θ), where n = 5 × 10^5.
Dataset Splits | No | The paper describes generating synthetic data for experiments but does not provide specific train/validation/test dataset split information (e.g., percentages, sample counts, or references to standard predefined splits).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions an 'accelerated Riemannian gradient descent method' but does not provide specific ancillary software details with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | Yes | For fixed values of θ ∈ [10^−3, 10^−2] and p0 ∈ [10^3, 10^4], we draw 50 instances of synthetic data by choosing a0 ∼ Unif(S^{p0−1}) and x0 ∈ R^n with x0 i.i.d. BG(θ), where n = 5 × 10^5. Note that choosing a0 this way implies µ(a0) ∼ 1/√p0. For each instance, we recover a0 and x0 from y = a0 ∗ x0 by minimizing problem (2.5). For ease of computation, we modify Algorithm 1 by replacing curvilinear search with an accelerated Riemannian gradient descent method (see Appendix M). Also, from Algorithm 1: 'λ = 0.1/√(p0 θ) and δ > 0 in ϕρ'.
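For reference, the quoted synthetic setup (a0 ∼ Unif(S^{p0−1}), x0 i.i.d. BG(θ), y = a0 ∗ x0) can be generated along the following lines. The use of cyclic convolution via the FFT and NumPy's default RNG are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def make_instance(p0=1_000, theta=1e-2, n=500_000, seed=None):
    """Draw one synthetic SaS instance: a0 on the sphere, x0 Bernoulli-Gaussian, y = a0 * x0."""
    rng = np.random.default_rng(seed)
    # a0 ~ Unif(S^{p0-1}): normalize an i.i.d. Gaussian vector.
    a0 = rng.standard_normal(p0)
    a0 /= np.linalg.norm(a0)
    # x0 i.i.d. BG(theta): each entry is nonzero with probability theta, with a Gaussian value.
    x0 = rng.standard_normal(n) * (rng.random(n) < theta)
    # Observation y = a0 * x0, taken here as cyclic convolution computed with the FFT.
    y = np.fft.irfft(np.fft.rfft(a0, n) * np.fft.rfft(x0, n), n)
    return a0, x0, y
```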