Structure-Blind Signal Recovery

Authors: Dmitry Ostrovsky, Zaid Harchaoui, Anatoli Juditsky, Arkadi S. Nemirovski

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present preliminary results on simulated data of the proposed adaptive signal recovery methods in several application scenarios. We compare the performance of the penalized ℓ2-recovery of Sec. 3 to that of the Lasso recovery of [1] in signal and image denoising problems. Implementation details for the penalized ℓ2-recovery are given in Sec. 6. Discussion of the discretization approach underlying the competing Lasso method can be found in [1, Sec. 3.6]. We follow the same methodology in both the signal and image denoising experiments. For each level of the signal-to-noise ratio SNR ∈ {1, 2, 4, 8, 16}, we perform N Monte-Carlo trials. In each trial, we generate a random signal x on a regular grid with n points, corrupted by i.i.d. Gaussian noise of variance σ². The signal is normalized, ‖x‖₂ = 1, so that SNR⁻¹ = σ√n. We set the regularization penalty in each method as follows. For the penalized ℓ2-recovery (8), we use λ = 2σ² log[63n/α] with α = 0.1. For the Lasso [1], we use the common setting λ = σ√(2 log n). We report experimental results by plotting the ℓ2-error ‖x̂ − x‖₂, averaged over the N Monte-Carlo trials, against the inverse signal-to-noise ratio SNR⁻¹.
Researcher Affiliation | Academia | LJK, University of Grenoble Alpes, 700 Avenue Centrale, 38401 Domaine Universitaire de Saint-Martin-d'Hères, France. University of Washington, Seattle, WA 98195, USA. Georgia Institute of Technology, Atlanta, GA 30332, USA.
Pseudocode | No | The paper discusses solving optimization problems using methods like Mirror-Prox and Nesterov's accelerated gradient algorithms, but it does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'Extensive theoretical discussions and numerical experiments will be presented in the follow-up journal paper.' It does not provide a link to source code or explicitly state its release.
Open Datasets | No | The paper describes generating synthetic data for its experiments: 'In each trial, we generate a random signal x on a regular grid with n points, corrupted by the i.i.d. Gaussian noise of variance σ².' This is not a publicly available or open dataset.
Dataset Splits | No | The paper mentions performing 'N Monte-Carlo trials' and generating random signals for denoising experiments, but it does not specify train, validation, or test dataset splits with percentages or counts.
Hardware Specification | No | The paper does not specify any hardware used for running the experiments (e.g., CPU, GPU, memory, or specific computing clusters).
Software Dependencies | No | The paper mentions using 'Mirror-Prox and Nesterov's accelerated gradient algorithms' for solving optimization problems and compares to 'Lasso [1]', but it does not provide specific version numbers for any software libraries or dependencies.
Experiment Setup | Yes | We set the regularization penalty in each method as follows. For the penalized ℓ2-recovery (8), we use λ = 2σ² log[63n/α] with α = 0.1. For the Lasso [1], we use the common setting λ = σ√(2 log n). We present additional numerical illustrations in the supplementary material. For the penalized ℓ2-recovery, we implement the blockwise denoising strategy (see Appendix for the implementation details) with just one block for the entire image.
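The experimental protocol quoted above (a unit-norm signal on an n-point grid, i.i.d. Gaussian noise with SNR⁻¹ = σ√n, and the two penalty settings) can be sketched as follows. This is only an illustrative reconstruction: the grid size n and the Gaussian placeholder for the "random signal" are assumptions, since the paper's exact signal-generation procedure is not quoted here.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256                        # grid size (illustrative; not stated in the quoted text)
alpha = 0.1                    # confidence parameter from the quoted setup
snr_levels = [1, 2, 4, 8, 16]  # SNR levels used in the Monte-Carlo trials

for snr in snr_levels:
    # With ||x||_2 = 1, the quoted relation SNR^-1 = sigma * sqrt(n) gives:
    sigma = 1.0 / (snr * np.sqrt(n))

    # Placeholder random signal, normalized to unit l2-norm.
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)

    # Observation corrupted by i.i.d. Gaussian noise of variance sigma^2.
    y = x + sigma * rng.standard_normal(n)

    # Regularization penalties as quoted in the review:
    lam_pen_l2 = 2 * sigma**2 * np.log(63 * n / alpha)  # penalized l2-recovery (8)
    lam_lasso = sigma * np.sqrt(2 * np.log(n))          # common Lasso setting
```

A full reproduction would then run each recovery method on y, compute the ℓ2-error ‖x̂ − x‖₂, and average it over N independent trials per SNR level.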