Structured Local Minima in Sparse Blind Deconvolution

Authors: Yuqian Zhang, Han-Wen Kuo, John Wright

NeurIPS 2018

Reproducibility assessment. Each entry below gives the variable, its assessed result, and the supporting LLM response.

Research Type: Experimental
LLM Response: 4 Experiments. Properties of a Random Kernel: In our main result, the sparsity rate θ depends on the condition number κ and the induced column coherence µ. Figure 3 plots the average values (over 100 independent simulations) of κ and µ for generic unit kernels of varying dimension k = 10, 20, ..., 1000. [...] Recovery Error of the Proposed Algorithm: We present the performance of Algorithm 1 under varying settings. We define the recovery error as err = 1 − max_τ |⟨a, P_{S^{k−1}}[ι_k^* s_τ[ã_0]]⟩| and calculate the average error over 50 independent experiments. The left figure plots the average error when we fix the kernel size k = 50 and vary the dimension m and the sparsity θ of x_0. The right figure plots the average error when we vary the dimensions k, m of both convolution signals and set the sparsity to θ = k^{−2/3}. Figure 4: Recovery Error of the Shift-Truncated Kernel by Algorithm 1.

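For concreteness, a minimal NumPy sketch of this error metric follows. It assumes ã_0 is the ground-truth kernel zero-padded to length m, s_τ a cyclic shift, ι_k^* restriction to the first k coordinates, and P_{S^{k−1}} projection onto the unit sphere; these readings of the formula are assumptions made here, not the authors' code.

    import numpy as np

    def recovery_error(a, a0, m):
        # err = 1 - max_tau |<a, P_S[iota_k^* s_tau[a0_tilde]]>|  (assumed reading)
        k = a.shape[0]
        a0_pad = np.zeros(m)
        a0_pad[:a0.shape[0]] = a0            # a0_tilde: zero-pad the true kernel to length m
        best = 0.0
        for tau in range(m):
            w = np.roll(a0_pad, tau)[:k]     # s_tau then iota_k^*: shift, keep first k samples
            n = np.linalg.norm(w)
            if n > 0:
                best = max(best, abs(a @ (w / n)))   # project onto the sphere, correlate with a
        return 1.0 - best

Under this reading, the error is 0 exactly when the recovered a aligns, up to sign, with some normalized shift truncation of a_0.
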
Researcher Affiliation: Academia
LLM Response: Yuqian Zhang, Han-Wen Kuo, John Wright; Department of Electrical Engineering and Data Science Institute, Columbia University, New York, NY 10027; {yz2409, hk2673, jw2966}@columbia.edu

Pseudocode: Yes
LLM Response: Algorithm 1 (Short and Sparse Blind Deconvolution). Input: observation y ∈ R^m and kernel size k. Output: recovered kernel a.
1: Generate a random index i ∈ [1, m] and set q_init = P_{S^{k−1}}[(Y Y^T)^{−1/2} y_i].
2: Solve the following nonconvex optimization problem with a descent algorithm that escapes saddle points and find a local minimizer q̄ = arg min_{q ∈ S^{k−1}} φ(q).
3: Set a = P_{S^{k−1}}[(Y Y^T)^{1/2} q̄].

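Since the paper provides pseudocode but no implementation, the following Python sketch illustrates the three steps under stated assumptions: Y is taken to be the matrix of cyclic length-k windows of y, the objective φ (not specified in this excerpt) is replaced by a smooth ℓ1 surrogate (log cosh), and plain Riemannian gradient descent from the data-driven initialization stands in for "a descent algorithm that escapes saddle points". None of these choices is confirmed by the excerpt.

    import numpy as np

    def sphere(v):
        return v / np.linalg.norm(v)

    def recover_kernel(y, k, n_iter=500, step=0.1, mu=0.1, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        m = y.shape[0]
        # Y: k x m matrix whose columns are cyclic length-k windows of y (an assumption;
        # the excerpt does not define Y)
        Y = np.stack([np.roll(y, -j)[:k] for j in range(m)], axis=1)
        # eigendecomposition of Y Y^T gives (Y Y^T)^(-1/2) and (Y Y^T)^(1/2);
        # assumes Y Y^T is well-conditioned
        S, U = np.linalg.eigh(Y @ Y.T)
        P = U @ np.diag(S ** -0.5) @ U.T
        Pinv = U @ np.diag(S ** 0.5) @ U.T
        # step 1: data-driven initialization from a random preconditioned window
        q = sphere(P @ Y[:, rng.integers(m)])
        # step 2: projected gradient descent on the sphere for the stand-in objective
        # phi(q) = (mu/m) * sum_j log cosh((q^T P Y)_j / mu), a smooth l1 surrogate
        Z = P @ Y
        for _ in range(n_iter):
            g = Z @ np.tanh(Z.T @ q / mu) / m   # Euclidean gradient of phi
            g -= (q @ g) * q                    # project onto the tangent space of the sphere
            q = sphere(q - step * g)
        # step 3: undo the preconditioning and renormalize
        return sphere(Pinv @ q)

In practice one would substitute the paper's actual objective φ and a solver with saddle-escaping guarantees (e.g., a trust-region or perturbed-gradient method); the skeleton of precondition, minimize over the sphere, un-precondition is what Algorithm 1 prescribes.
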
Open Source Code: No
LLM Response: The paper does not provide any concrete access to source code for the methodology described, nor does it explicitly state that code will be released.

Open Datasets: No
LLM Response: The paper describes experiments based on simulations and mentions properties of random kernels and sparse coefficients (x_0 i.i.d. BG(θ) ∈ R^m), but it does not provide concrete access information (link, DOI, repository, or specific citation with authors/year) for any publicly available or open dataset used in its experiments.

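The simulated data the response refers to are straightforward to regenerate. A hedged sketch, assuming the cyclic convolution model y = a_0 ⊛ x_0 with a generic unit kernel a_0 and x_0 i.i.d. Bernoulli-Gaussian(θ); the boundary model is our assumption, not fixed by the excerpt:

    import numpy as np

    def synth_instance(k, m, theta, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        a0 = rng.standard_normal(k)
        a0 /= np.linalg.norm(a0)                 # generic kernel on the unit sphere
        # x0 i.i.d. Bernoulli-Gaussian(theta): each entry is nonzero with probability theta
        x0 = rng.binomial(1, theta, size=m) * rng.standard_normal(m)
        a0_pad = np.zeros(m)
        a0_pad[:k] = a0
        # cyclic convolution via FFT (assumed boundary model)
        y = np.real(np.fft.ifft(np.fft.fft(a0_pad) * np.fft.fft(x0)))
        return y, a0, x0
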
Dataset Splits: No
LLM Response: The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) needed to reproduce data partitioning for training, validation, or testing.

Hardware Specification: No
LLM Response: The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments or simulations.

Software Dependencies: No
LLM Response: The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.

Experiment Setup: No
LLM Response: The paper describes parameters of the synthetic data generation (e.g., kernel size k, sparsity θ, signal length m) and the model, but it does not specify concrete hyperparameter values or system-level training settings (e.g., learning rate, batch size, optimizer) for the descent algorithm used in the experiments.