A Nonconvex Approach for Exact and Efficient Multichannel Sparse Blind Deconvolution
Authors: Qing Qu, Xiao Li, Zhihui Zhu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical results are corroborated by numerical experiments, which demonstrate superior performance of the proposed approach over the previous methods on both synthetic and real datasets. |
| Researcher Affiliation | Academia | Qing Qu, New York University (qq213@nyu.edu); Xiao Li, Chinese University of Hong Kong (xli@ee.cuhk.edu.hk); Zhihui Zhu, Johns Hopkins University (zzhu29@jhu.edu). ZZ is also with the Department of Electrical & Computer Engineering, University of Denver. |
| Pseudocode | No | The paper describes its algorithms (Riemannian gradient descent (RGD) and a subgradient method) through mathematical equations (13, 15) and prose, but it does not include a formally labeled 'Algorithm' or pseudocode block; a hedged sketch of the RGD template appears after this table. |
| Open Source Code | No | The paper states: 'The full version [31] of this work can be found at https://arxiv.org/abs/1908.10776.' This links to the paper's preprint on arXiv, not to source code for the methodology. |
| Open Datasets | Yes | We test our algorithms on this task, by using p = 1000 frames obtained from a standard dataset [17]. Available at http://bigwww.epfl.ch/smlm/datasets/index.html?p=tubulin-conjal647. |
| Dataset Splits | No | The paper describes generating synthetic data and using a real dataset of 1000 frames, but it does not specify any training, validation, or test dataset splits or cross-validation methodology. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'CVX [46]' for solving convex problems and 'FFT' for efficient implementation, but it does not specify version numbers for any software dependencies. |
| Experiment Setup | No | The paper mentions aspects of the experimental setup such as random initialization, a line search for the step size, and parameters for synthetic data generation (p, n, θ). However, it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed optimizer settings; a hedged data-generation sketch also appears after this table. |
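As noted in the Pseudocode row, the paper presents its solvers only as equations (13, 15). Below is a minimal sketch of the generic Riemannian gradient descent template over the unit sphere that such equations typically instantiate; `loss_grad` is a placeholder for the paper's actual objective gradient, and the fixed `step` stands in for the paper's line search, so none of these names or defaults come from the paper itself.

```python
import numpy as np

def riemannian_gd_sphere(loss_grad, q0, step=0.1, iters=500):
    """Generic Riemannian gradient descent over the unit sphere.

    loss_grad(q) should return the Euclidean gradient of the loss at q
    (a placeholder here, not the paper's equation (13)). Each iteration
    projects that gradient onto the tangent space of the sphere at q,
    takes a step, and retracts back onto the sphere by normalization.
    """
    q = q0 / np.linalg.norm(q0)
    for _ in range(iters):
        g = loss_grad(q)
        g_tan = g - (q @ g) * q          # tangent-space projection
        q = q - step * g_tan             # gradient step
        q = q / np.linalg.norm(q)        # retraction back to the sphere
    return q
```

The paper's subgradient method would follow the same loop with `loss_grad` returning a subgradient of a nonsmooth loss, and with the fixed step size replaced by the line search the paper describes.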
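For the Experiment Setup row, the synthetic data are parameterized by the number of channels p, the signal length n, and the sparsity level θ, and the Software Dependencies row notes that convolutions are implemented efficiently with the FFT. The sketch below generates multichannel observations y_i = a ⊛ x_i under a Bernoulli-Gaussian sparsity model, a standard choice in this literature; the model, the random unit-norm kernel, and all default values are assumptions rather than settings confirmed by the paper.

```python
import numpy as np

def make_multichannel_data(p=100, n=256, theta=0.1, seed=None):
    """Generate p channels y_i = a (circularly convolved with) x_i.

    Assumptions not confirmed by the paper: the kernel a is a random
    unit-norm vector of length n, and each sparse signal x_i is
    Bernoulli-Gaussian with rate theta. Circular convolution is done
    in the Fourier domain, per the paper's FFT-based implementation.
    """
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(n)
    a /= np.linalg.norm(a)                   # unit-norm kernel
    mask = rng.random((p, n)) < theta        # Bernoulli support, rate theta
    x = mask * rng.standard_normal((p, n))   # Gaussian amplitudes
    # Circular convolution of each channel with a, computed via the FFT.
    y = np.fft.ifft(np.fft.fft(x, axis=1) * np.fft.fft(a), axis=1).real
    return a, x, y
```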