Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Statistical Component Separation for Targeted Signal Recovery in Noisy Mixtures
Authors: Bruno Régaldo-Saint Blancard, Michael Eickenberg
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Then, we apply it in an image denoising context employing 1) wavelet-based descriptors, 2) ConvNet-based descriptors on astrophysics and ImageNet data. In the case of 1), we show that our method better recovers the descriptors of the target data than a standard denoising method in most situations. Additionally, despite not being constructed for this purpose, it performs surprisingly well in terms of peak signal-to-noise ratio on full signal reconstruction. |
| Researcher Affiliation | Industry | Bruno Régaldo-Saint Blancard (EMAIL), Center for Computational Mathematics, Flatiron Institute, New York, NY 10010; Michael Eickenberg (EMAIL), Center for Computational Mathematics, Flatiron Institute, New York, NY 10010 |
| Pseudocode | Yes | Algorithm 1 (Vanilla Statistical Component Separation). Inputs: y, p(ε₀), Q, T, gradient-based optimizer (e.g. LBFGS). Initialize: x̂₀ = y. For i = 1 … T: sample ε₁, …, ε_Q ~ p(ε₀); compute L̂(x̂₀) = (1/Q) Σ_{k=1}^{Q} ‖φ(x̂₀ + ε_k) − φ(y)‖²; update x̂₀ ← one_step_optim[x̂₀, L̂(x̂₀)]. Return x̂₀. |
| Open Source Code | Yes | Codes and data are provided on GitHub: https://github.com/bregaldo/stat_comp_sep |
| Open Datasets | Yes | We consider three different types of 256×256 images corresponding to a simulation of the emission of dust grains in the interstellar medium (the dust image), a simulation of the large-scale structure of the Universe (Villaescusa-Navarro et al., 2020) (the LSS image), and randomly selected images from the ImageNet dataset (Deng et al., 2009) (the ImageNet images). |
| Dataset Splits | No | The paper describes experiments involving different noise realizations of images but does not specify traditional train/test/validation dataset splits, as it focuses on denoising. |
| Hardware Specification | Yes | Each optimization takes 40 s with a GPU-accelerated code on an A100 GPU. |
| Software Dependencies | No | The paper mentions using a 'gradient-based optimizer (e.g. LBFGS)' and refers to the 'VGG-19_BN network' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We proceed similarly in the following, and fix the number of iterations to T = 30 and the batch size to Q = 100. ... We vary, for the colored noises, the amplitude σ of the noise considering 10 different levels ranging from 0.1 to 2.14 (logarithmically spaced) in units of the standard deviation of x, and for the crosses noises, the density of crosses ρ considering 10 different values ranging from 0.001 to 0.063 (logarithmically spaced). ... with α_i = 1/P and P = 10σ ... and T = 10 |
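To make the pseudocode row concrete, below is a minimal sketch of the vanilla statistical component separation loop in PyTorch. The descriptor function `phi`, the noise sampler, and the use of `torch.optim.LBFGS` with one inner step per outer iteration are assumptions for illustration; the paper's actual code (wavelet- or ConvNet-based descriptors) is on the linked GitHub repository.

```python
import torch


def statistical_component_separation(y, sample_noise, phi, T=30, Q=100, lr=0.5):
    """Sketch of Algorithm 1 (vanilla SCS), under illustrative assumptions.

    y            -- observed noisy signal (torch tensor)
    sample_noise -- callable: Q -> tensor of Q noise draws from p(eps_0),
                    shaped (Q, *y.shape)
    phi          -- statistical descriptor function; must accept a batch
                    and return one descriptor vector per sample
    """
    # Initialize the estimate at the observation, as in the paper.
    x = y.detach().clone().requires_grad_(True)
    phi_y = phi(y).detach()  # target descriptors, fixed throughout

    # The paper says "gradient-based optimizer (e.g. LBFGS)"; we use LBFGS
    # with max_iter=1 so each opt.step is one outer iteration.
    opt = torch.optim.LBFGS([x], lr=lr, max_iter=1)

    for _ in range(T):
        def closure():
            opt.zero_grad()
            eps = sample_noise(Q)  # (Q, *y.shape)
            # Monte Carlo loss: (1/Q) * sum_k || phi(x + eps_k) - phi(y) ||^2
            loss = ((phi(x + eps) - phi_y) ** 2).sum(dim=-1).mean()
            loss.backward()
            return loss

        opt.step(closure)

    return x.detach()
```

A toy `phi` (e.g. mean and standard deviation of the signal) is enough to exercise the loop; the quality of the recovered components depends entirely on how informative the chosen descriptors are about the target signal.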