Neurally Augmented ALISTA

Authors: Freya Behrens, Jonathan Sauder, Peter Jung

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate NA-ALISTA in a sparse reconstruction task and compare it against ALISTA (Liu et al., 2019), ALISTA-AT (Kim & Park, 2020), AGLISTA (Wu et al., 2020), as well as the classical ISTA (Daubechies et al., 2004) and FISTA (Beck & Teboulle, 2009). To emphasize a fair and reproducible comparison between the models, the code for all experiments listed is available on GitHub. (A minimal ISTA reference sketch appears after the table.)
Researcher Affiliation | Academia | Freya Behrens (1), Jonathan Sauder (1), Peter Jung (1,2); (1) Communications and Information Theory Chair, Technical University of Berlin; (2) Data Science in Earth Observation, Technical University of Munich
Pseudocode | Yes | Algorithm 1: Neurally Augmented ALISTA. (A hedged sketch of the iteration appears after the table.)
Open Source Code | Yes | To emphasize a fair and reproducible comparison between the models, the code for all experiments listed is available on GitHub: https://github.com/feeds/na-alista
Open Datasets | No | The paper describes generating synthetic data from random variables and distributions, and using a statistical model for real-world channel estimation. It does not provide concrete access information (link, DOI, specific repository, or formal citation with authors/year) for a pre-existing publicly available dataset.
Dataset Splits | No | The paper states that a 'test set of 10000 samples is fixed before training' and that 'We train all algorithms for 400 epochs', but it does not explicitly state the use or size of a separate validation set, or the specific training/validation/test splits.
Hardware Specification | Yes | Computations were run on a system with an NVIDIA Tesla P100 GPU and an Intel(R) Xeon(R) CPU, with the GPU enabled (a) and CPU only (b).
Software Dependencies | No | The paper mentions the use of 'the Adam optimizer' and 'support selection' but does not provide specific version numbers for any software components or libraries.
Experiment Setup | Yes | When not otherwise indicated, we use the following settings for experiments and algorithms: M = 250, N = 1000, S = 50, K = 16, H = 128, and y = Φx + z with additive white Gaussian noise z at a signal-to-noise ratio SNR := E(‖Φx‖₂²)/E(‖z‖₂²) = 40 dB. We train all algorithms for 400 epochs, with each epoch containing 50,000 sparse vectors at a batch size of 512. (A data-generation sketch follows the table.)
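
The comparison row above lists ISTA and FISTA as classical baselines. For reference, here is a minimal NumPy sketch of textbook ISTA for min_x ½‖Φx − y‖₂² + λ‖x‖₁, using the standard step size 1/L with L = ‖Φ‖₂². This is the generic algorithm, not the authors' implementation; lam and iters are illustrative values, not settings from the paper.

```python
import numpy as np

def ista(Phi, y, lam=0.1, iters=1000):
    """Textbook ISTA for 0.5*||Phi x - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(Phi, ord=2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        v = x - Phi.T @ (Phi @ x - y) / L  # gradient step on the quadratic term
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft-thresholding
    return x
```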
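The pseudocode row confirms that Algorithm 1 (Neurally Augmented ALISTA) exists, but the algorithm itself is not quoted in this excerpt. The PyTorch sketch below is therefore only one plausible reading: ALISTA iterations x_{k+1} = η_{θ_k}(x_k + γ_k · W^T(y − Φx_k)), where an LSTM (H = 128 hidden units over K = 16 iterations, per the setup row) emits per-sample step sizes γ_k and thresholds θ_k. The choice of LSTM input features (two residual norms) and the output parameterization are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class NAALISTA(nn.Module):
    def __init__(self, Phi, W, K=16, H=128):
        super().__init__()
        # Phi: (M, N) measurement matrix; W: (M, N) analytic ALISTA weight matrix
        self.register_buffer("Phi", Phi)
        self.register_buffer("W", W)
        self.K = K
        self.cell = nn.LSTMCell(2, H)  # assumed inputs: two per-sample residual norms
        self.head = nn.Linear(H, 2)    # outputs: step size gamma_k, threshold theta_k

    def forward(self, y):
        B, N = y.shape[0], self.Phi.shape[1]
        x = y.new_zeros(B, N)
        x_prev = x
        h = y.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        for _ in range(self.K):
            r = y - x @ self.Phi.T  # residual y - Phi x, shape (B, M)
            feats = torch.stack([r.norm(dim=1), (x - x_prev).norm(dim=1)], dim=1)
            h, c = self.cell(feats, (h, c))
            gamma, theta = self.head(h).abs().unbind(dim=1)  # keep both nonnegative
            x_prev = x
            v = x + gamma[:, None] * (r @ self.W)  # gamma_k * W^T (y - Phi x) step
            x = torch.sign(v) * torch.relu(v.abs() - theta[:, None])  # soft threshold
        return x
```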
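To make the setup row concrete, here is a hedged NumPy sketch of the synthetic data generation it implies: S-sparse x ∈ R^N and y = Φx + z, with z scaled so that E(‖Φx‖₂²)/E(‖z‖₂²) = 40 dB. The distributions of Φ and of the nonzero amplitudes of x are assumptions (i.i.d. Gaussian), not confirmed by this excerpt.

```python
import numpy as np

def sample_batch(Phi, S=50, snr_db=40.0, batch=512, rng=None):
    """y = Phi x + z with S-sparse x and noise at the given SNR (in dB).
    Support uniform at random; nonzeros i.i.d. standard normal (assumed)."""
    rng = rng or np.random.default_rng()
    M, N = Phi.shape
    x = np.zeros((batch, N))
    for i in range(batch):
        idx = rng.choice(N, size=S, replace=False)
        x[i, idx] = rng.normal(size=S)
    y_clean = x @ Phi.T
    # Scale noise so that E||Phi x||_2^2 / E||z||_2^2 = 10^(snr_db / 10).
    signal_power = np.mean(np.sum(y_clean ** 2, axis=1))
    noise_var = signal_power / (M * 10 ** (snr_db / 10.0))
    z = rng.normal(scale=np.sqrt(noise_var), size=(batch, M))
    return x, y_clean + z

# Example with the paper's dimensions (Phi assumed i.i.d. Gaussian, scaled by 1/sqrt(M)):
rng = np.random.default_rng(0)
Phi = rng.normal(size=(250, 1000)) / np.sqrt(250)
x, y = sample_batch(Phi, S=50, snr_db=40.0, batch=512, rng=rng)
```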