Unrolling PALM for Sparse Semi-Blind Source Separation
Authors: Mohammad Fahes, Christophe Kervazo, Jérôme Bobin, Florence Tupin
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the relevance of LPALM in astrophysical multispectral imaging: the algorithm not only needs up to 10⁴–10⁵ times fewer iterations than PALM, but also improves the separation quality, while avoiding the cumbersome hyperparameter and initialization choice of PALM. We further show that LPALM outperforms other unrolled source separation methods in the semi-blind setting. |
| Researcher Affiliation | Academia | Mohammad Fahes¹, Christophe Kervazo¹, Jérôme Bobin² & Florence Tupin¹. ¹ LTCI, Télécom Paris, Institut Polytechnique de Paris, Palaiseau, France; ² CEA Saclay, Gif-sur-Yvette, France |
| Pseudocode | Yes | The LPALM algorithm: Putting together the above updates, the LPALM algorithm then reads as²: Input: X; output: A_pred = A^(K), S_pred = S^(K). Initialize: A^(0) = (1/m) 1_{m×n}, S^(0) = 0_{n×t}. For k in 0, ..., K−1: [...] (an illustrative sketch of one unrolled layer follows the table) |
| Open Source Code | Yes | Putting together the above updates, the LPALM algorithm then reads as²: https://github.com/mfahes/LPALM |
| Open Datasets | Yes | In this article, the mixing matrices come from astrophysical simulations, described in (Picquenot et al., 2019), which have been derived from real astrophysical data: the Cassiopeia A supernova remnant as observed by the X-ray space telescope Chandra (chandra.harvard.edu). |
| Dataset Splits | No | The paper mentions that the data is split into 750 training samples and 150 testing samples but does not specify a validation set split or its size in the main text. Although Appendix D.4 mentions 'monitoring the evolution of the validation loss', no explicit split details for a validation set are provided. |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) used for experiments are provided in the paper. |
| Software Dependencies | No | For the implementation, Pytorch library (Paszke et al., 2019) is used. The paper does not specify the version number for Pytorch or any other software. |
| Experiment Setup | Yes | For the training, Adam optimizer (Kingma & Ba, 2014) is used with β₁ = 0.9 and β₂ = 0.999. [...] The training is done on 100 epochs with a learning rate LR = 0.0001. The batch size is 1. [...] The number of layers is K = 25. (A hypothetical training sketch reflecting these hyperparameters follows the table.) |
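
To give a concrete picture of the pseudocode quoted in the table, here is a minimal PyTorch sketch of a stack of unrolled PALM layers. It follows the generic PALM structure (a gradient step plus soft-thresholding on S, a gradient step plus a projection on A) with learnable per-layer step sizes and threshold; the exact parameterization learned in LPALM (e.g. which operators are replaced by trained weights, and the constraint used on A) may differ, and all names and constraints below are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn


def soft_threshold(z, lam):
    """Proximal operator of the l1 norm (soft-thresholding)."""
    return torch.sign(z) * torch.clamp(torch.abs(z) - lam, min=0.0)


def project_columns(a):
    """Project each column of A onto the non-negative unit sphere (illustrative constraint)."""
    a = torch.clamp(a, min=0.0)
    return a / a.norm(dim=0, keepdim=True).clamp(min=1e-12)


class LPALMLayer(nn.Module):
    """One unrolled PALM iteration with learnable step sizes and threshold (illustrative)."""

    def __init__(self):
        super().__init__()
        self.log_step_s = nn.Parameter(torch.zeros(1))          # step size of the S-update
        self.log_step_a = nn.Parameter(torch.zeros(1))          # step size of the A-update
        self.log_thresh = nn.Parameter(torch.full((1,), -4.0))  # sparsity threshold for S

    def forward(self, x, a, s):
        # Gradient step on the data-fidelity term, then the l1 prox (sparse S-update)
        grad_s = a.T @ (a @ s - x)
        s = soft_threshold(s - self.log_step_s.exp() * grad_s, self.log_thresh.exp())
        # Gradient step on the data-fidelity term, then a projection enforcing the constraint on A
        grad_a = (a @ s - x) @ s.T
        a = project_columns(a - self.log_step_a.exp() * grad_a)
        return a, s


class LPALM(nn.Module):
    """K stacked layers; returns the estimates A_pred = A^(K) and S_pred = S^(K)."""

    def __init__(self, m, n, k_layers=25):
        super().__init__()
        self.m, self.n = m, n
        self.layers = nn.ModuleList([LPALMLayer() for _ in range(k_layers)])

    def forward(self, x):
        # Initialization as in the quoted pseudocode: A^(0) = (1/m) * ones, S^(0) = zeros
        a = torch.full((self.m, self.n), 1.0 / self.m, device=x.device)
        s = torch.zeros((self.n, x.shape[1]), device=x.device)
        for layer in self.layers:
            a, s = layer(x, a, s)
        return a, s
```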
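
Below is a hypothetical training loop that encodes only the hyperparameters reported in the Experiment Setup row (Adam with β₁ = 0.9 and β₂ = 0.999, learning rate 1e-4, 100 epochs, batch size 1, K = 25 layers). It reuses the LPALM module and project_columns helper from the sketch above; the synthetic data, problem sizes, and supervised loss on A are placeholders, not the paper's pipeline.

```python
import torch

# Illustrative problem sizes and synthetic stand-in data (not the paper's dataset)
m, n, t = 10, 4, 1000
train_set = [(torch.randn(m, t), project_columns(torch.rand(m, n))) for _ in range(8)]

model = LPALM(m=m, n=n, k_layers=25)                 # K = 25 layers, as reported
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

for epoch in range(100):                             # 100 epochs, as reported
    for x, a_true in train_set:                      # batch size 1: one mixture per step
        a_pred, s_pred = model(x)
        loss = torch.mean((a_pred - a_true) ** 2)    # placeholder supervised loss on A
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```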