Understanding approximate and unrolled dictionary learning for pattern recovery
Authors: Benoît Malézieux, Thomas Moreau, Matthieu Kowalski
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we apply unrolling on pattern learning in magnetoencephalography (MEG) with the help of a stochastic algorithm and compare the performance to a state-of-the-art method. |
| Researcher Affiliation | Academia | Benoît Malézieux (Université Paris-Saclay, Inria, CEA; L2S, Université Paris-Saclay, CNRS, CentraleSupélec) benoit.malezieux@inria.fr; Thomas Moreau (Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France) thomas.moreau@inria.fr; Matthieu Kowalski (L2S, Université Paris-Saclay, CNRS, CentraleSupélec, Gif-sur-Yvette, 91190, France) matthieu.kowalski@universite-paris-saclay.fr |
| Pseudocode | Yes | Algorithm 1 ISTA (see the ISTA sketch after this table) |
| Open Source Code | Yes | Code is available at https://github.com/bmalezieux/unrolled_dl. |
| Open Datasets | Yes | The data are generated from a random Gaussian dictionary D of size 30 × 50, with Bernoulli-Gaussian sparse codes z (sparsity 0.3, σ_z² = 1), and Gaussian noise (σ_noise² = 0.1); more details in Appendix A. ... We reproduce the multivariate CSC experiments of alphacsc (Dupré la Tour et al., 2018) on the dataset `sample` of MNE (Gramfort et al., 2013): 6 minutes of recordings with 204 channels sampled at 150 Hz with visual and audio stimuli. (See the data-generation sketch after this table.) |
| Dataset Splits | No | The paper does not explicitly state specific train/validation/test splits, percentages, or sample counts for its experiments. |
| Hardware Specification | Yes | The computations have been performed on an NVIDIA Tesla V100-DGXS 32GB GPU using PyTorch (Paszke et al., 2019). |
| Software Dependencies | No | The paper mentions PyTorch and K3D-Jupyter but does not specify their version numbers. |
| Experiment Setup | Yes | The data are generated from a random Gaussian dictionary D of size 30 × 50, with Bernoulli-Gaussian sparse codes z (sparsity 0.3, σ_z² = 1), and Gaussian noise (σ_noise² = 0.1); more details in Appendix A. ... We optimize with projected gradient descent combined with a line search... We learn a dictionary composed of 128 atoms on 10 × 10 patches with FISTA and λ = 0.1 in all cases. ... with 20 unrolled iterations of FISTA and λ = 0.1. ... with 30 unrolled iterations and 100 iterations with batch size 20. ... 40 atoms of 1 second on mini-batches of 10 seconds, with 30 unrolled iterations of FISTA, λ_scaled = 0.3, and 10 epochs with 10 iterations per epoch. (See the unrolled-FISTA sketch after this table.) |
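The sketches below are minimal re-implementations of the quoted setup, for orientation only; they are not the authors' released code (see the repository linked above for that). First, ISTA for the Lasso sub-problem `min_z 0.5 ||x - Dz||^2 + λ ||z||_1` with a fixed dictionary, matching the paper's Algorithm 1 in spirit; the function name, tensor shapes, and step-size computation are assumptions.

```python
import torch


def ista(D, x, lam, n_iter=20):
    """Iterative Soft-Thresholding for the sparse codes z, with D fixed.

    D: (n_features, n_atoms) dictionary; x: (n_features, n_samples) signals.
    """
    # Step size 1/L, where L = ||D||_2^2 is the Lipschitz constant of the
    # gradient of the data-fit term.
    L = torch.linalg.matrix_norm(D, ord=2) ** 2
    z = torch.zeros(D.shape[1], x.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)  # gradient of 0.5 ||x - Dz||^2
        z = torch.nn.functional.softshrink(z - grad / L, (lam / L).item())
    return z
```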
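Next, the synthetic data described in the Open Datasets row: a random Gaussian dictionary of size 30 × 50, Bernoulli-Gaussian sparse codes (sparsity 0.3, σ_z² = 1), and Gaussian noise (σ_noise² = 0.1). The atom normalization and the number of samples are assumptions; Appendix A of the paper gives the exact protocol.

```python
import torch

n_features, n_atoms, n_samples = 30, 50, 1000  # n_samples is an assumption
D = torch.randn(n_features, n_atoms)
D /= D.norm(dim=0, keepdim=True)  # unit-norm atoms (assumed normalization)
support = torch.rand(n_atoms, n_samples) < 0.3  # Bernoulli support, sparsity 0.3
z = support * torch.randn(n_atoms, n_samples)  # Gaussian amplitudes, sigma_z^2 = 1
x = D @ z + 0.1 ** 0.5 * torch.randn(n_features, n_samples)  # sigma_noise^2 = 0.1
```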
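Finally, a sketch of the unrolled approach from the Experiment Setup row: a fixed number of FISTA iterations is unrolled into a differentiable graph and the dictionary is updated by backpropagating through it. Adam is used here for brevity, whereas the paper optimizes with projected gradient descent combined with a line search; the placeholder data and hyperparameters are illustrative.

```python
import torch


def fista(D, x, lam, n_iter=20):
    """Unrolled FISTA: autograd tracks all iterations, so gradients reach D."""
    L = torch.linalg.matrix_norm(D, ord=2) ** 2
    z = torch.zeros(D.shape[1], x.shape[1])
    y, t = z, 1.0
    for _ in range(n_iter):
        z_next = torch.nn.functional.softshrink(
            y - D.T @ (D @ y - x) / L, (lam / L).item()
        )
        t_next = (1 + (1 + 4 * t**2) ** 0.5) / 2
        y = z_next + (t - 1) / t_next * (z_next - z)  # momentum step
        z, t = z_next, t_next
    return z


x = torch.randn(30, 256)  # placeholder batch; replace with real signals
D = torch.randn(30, 50, requires_grad=True)
opt = torch.optim.Adam([D], lr=1e-2)
for _ in range(100):
    z = fista(D, x, lam=0.1)
    loss = 0.5 * ((x - D @ z) ** 2).sum() + 0.1 * z.abs().sum()
    opt.zero_grad()
    loss.backward()  # backpropagation through the unrolled iterations
    opt.step()
    with torch.no_grad():  # project atoms onto the unit ball
        D /= D.norm(dim=0, keepdim=True).clamp(min=1)
```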