Learning step sizes for unfolded sparse coding

Authors: Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort

NeurIPS 2019

Reproducibility assessment. Each entry below gives the variable, the extracted result, and the supporting LLM response.

Research Type: Experimental
Evidence (Section 5, "Numerical Experiments"): "This section provides numerical arguments to compare SLISTA to LISTA and ISTA. All the experiments were run using Python [Python Software Foundation, 2017] and PyTorch [Paszke et al., 2017]. The code to reproduce the figures is available online. Network comparisons: We compare the proposed approach SLISTA to state-of-the-art learned methods LISTA [Chen et al., 2018] and ALISTA [Liu et al., 2019] on synthetic and semi-real cases. ... Figure 6 shows the test curves for different levels of regularization, λ = 0.1 and 0.8."

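For context on what these experiments compare, here is a minimal NumPy sketch of the ISTA baseline that LISTA and SLISTA unroll. The dictionary, problem sizes, and iteration count are illustrative assumptions, not the paper's settings; only the λ values 0.1 and 0.8 come from the quoted text.

```python
import numpy as np

def soft_threshold(z, threshold):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - threshold, 0.0)

def ista(x, D, lmbda, n_iter=100):
    """Plain ISTA for min_z 0.5 * ||x - D z||_2^2 + lmbda * ||z||_1."""
    L = np.linalg.norm(D, ord=2) ** 2   # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)        # gradient of the quadratic term
        z = soft_threshold(z - grad / L, lmbda / L)
    return z
```
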
Researcher Affiliation: Academia
Evidence: "Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort; Inria, CEA, Université Paris-Saclay; {pierre.ablin,thomas.moreau,mathurin.massias,alexandre.gramfort}@inria.fr"

Pseudocode: Yes
Evidence: "Algorithm 1: Oracle-ISTA (OISTA) with larger step sizes."

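A simplified sketch of the oracle-step idea that the algorithm's title refers to: retake the ISTA step with the local constant L_S of the dictionary restricted to the support S that a standard step would reach. This is a hedged reconstruction from the title and the paper's premise, not a transcription of Algorithm 1; in particular, the support-safeguard fallback below is an assumption.

```python
import numpy as np

def soft_threshold(z, threshold):
    return np.sign(z) * np.maximum(np.abs(z) - threshold, 0.0)

def oista_step(z, x, D, lmbda, L):
    """One Oracle-ISTA step (sketch; may differ from the paper's Algorithm 1).

    Idea: find the support S reached by a vanilla ISTA step, then redo the
    step with the larger step size 1 / L_S, where L_S = ||D_S||_2^2 <= L.
    """
    grad = D.T @ (D @ z - x)
    z_ista = soft_threshold(z - grad / L, lmbda / L)   # vanilla ISTA step
    S = np.flatnonzero(z_ista)
    if S.size == 0:
        return z_ista
    L_S = np.linalg.norm(D[:, S], ord=2) ** 2          # local Lipschitz constant
    z_big = soft_threshold(z - grad / L_S, lmbda / L_S)
    # Assumed safeguard: keep the larger step only if the support stays in S.
    if np.all(np.isin(np.flatnonzero(z_big), S)):
        return z_big
    return z_ista
```
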
Open Source Code: Yes
Evidence: "The code to reproduce the figures is available online." Footnote 2: "The code can be found at https://github.com/tomMoral/adopty"

Open Datasets: Yes
Evidence: "For the semi-real case, we used the digits dataset from scikit-learn [Pedregosa et al., 2011]."

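The referenced dataset ships with scikit-learn, so loading it is a one-liner; the normalization line below is a hypothetical preprocessing step, not necessarily what the paper does.

```python
import numpy as np
from sklearn.datasets import load_digits

# scikit-learn's bundled digits dataset: 1797 grayscale 8x8 images,
# no download required.
X = load_digits().data                              # shape (1797, 64)
X = X / np.linalg.norm(X, axis=1, keepdims=True)    # hypothetical normalization
```
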
Dataset Splits: No
Evidence: "The networks are trained by minimizing the empirical loss (Eq. 15) on a training set of size N_train = 10,000, and we report the loss on a test set of size N_test = 10,000."
Assessment: no explicit mention of a validation set or its size is made.

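To make the reported sizes concrete, here is a minimal synthetic-data split matching N_train = N_test = 10,000. The generator, dimensions, and unit-norm dictionary are illustrative assumptions; the paper's exact setup is in its experimental section and Appendix D.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 10_000, 10_000    # sizes reported in the paper
n_dim, n_atoms = 64, 256            # illustrative dimensions

D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms (a common convention)

X = rng.standard_normal((n_train + n_test, n_dim))
x_train, x_test = X[:n_train], X[n_train:]
# No validation set is described; any model selection would have to carve
# one out of x_train.
```
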
Hardware Specification: No
Assessment: the paper does not specify the hardware used for the experiments, such as CPU or GPU models.

Software Dependencies: No
Evidence: "All the experiments were run using Python [Python Software Foundation, 2017] and pytorch [Paszke et al., 2017]."
Assessment: the paper names the software but gives no version numbers, which are needed for exact reproduction.

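Since versions are the missing piece, a reproduction attempt would need to record them itself. A minimal way to do so, assuming PyTorch is installed:

```python
import sys
import torch

# Record the exact interpreter and library versions alongside any results.
print("python :", sys.version.split()[0])
print("pytorch:", torch.__version__)
print("cuda   :", torch.version.cuda)   # None on CPU-only builds
```
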
Experiment Setup: No
Evidence: "Further details on training are in Appendix D."
Assessment: specific setup details such as hyperparameters and optimizer settings are deferred to the appendix rather than given in the main text.