Straight-Through Meets Sparse Recovery: the Support Exploration Algorithm

Authors: Mimoun Mohamed, François Malgouyres, Valentin Emiya, Caroline Chaux

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The performances of SEA are compared to those of state-of-the-art algorithms on: 1/ phase-transition synthetic experiments for Gaussian matrices; 2/ spike deconvolution problems; 3/ classification and regression problems for real datasets.
Researcher Affiliation | Academia | 1 Aix Marseille Univ, CNRS, LIS, Marseille, France; 2 Aix Marseille Univ, CNRS, I2M, Marseille, France; 3 Institut de Mathématiques de Toulouse, UMR 5219, Université de Toulouse, CNRS, UPS IMT, F-31062 Toulouse Cedex 9, France; 4 CNRS, IPAL, Singapore. Correspondence to: Mimoun Mohamed <mimoun.mohamed@lis-lab.fr>.
Pseudocode | Yes | Algorithm 1: Support Exploration Algorithm (a hedged Python sketch of SEA is given after the table).
Open Source Code | Yes | The code is available in the git repository of the project (footnote 7).
Open Datasets | Yes | We use the preprocessed public datasets (footnote 10) provided by (Axiotis & Sviridenko, 2020), following the same preprocessing pipeline: we augment A with an extra column equal to 1 to allow a bias and normalize the columns of A (this preprocessing is sketched after the table). [Footnote 10: https://drive.google.com/file/d/1RDu2d46qGLI77AzliBQleSsB5WwF83TF/view]
Dataset Splits | No | The paper describes generating synthetic data (e.g., "For each run, the entries of A ∈ R^{m×n} are drawn independently from the standard normal distribution.") and uses the "preprocessed public datasets" of another work, but it does not explicitly state training/validation/test splits for either the real or the generated data. It specifies the number of runs and the evaluation metrics, not a partitioning into conventional splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models, or memory) used for running its experiments.
Software Dependencies | No | The code is implemented in Python 3 and is available in the git repository of the project (footnote 7). For all algorithms, each least-squares projection for a fixed support, as in Line 7 of Algorithm 1, is solved using the conjugate gradient descent of SciPy (Virtanen et al., 2020); this projection step is sketched after the table.
Experiment Setup | Yes | For all algorithms, 256k iterations are performed. The step size of SEA, HTP, and IHT is arbitrarily fixed (footnote 6) to η = 1.8/L, where L is the spectral radius of A. The columns of A are normalized before solving the problem. The sparse vector x* ∈ R^n is random: support indexes are picked uniformly at random without replacement, and the non-zero entries of x* are drawn uniformly in [−2, −1] ∪ [1, 2] as in (Elad, 2010). The noise e is drawn following the method of (Blanchard & Tanner, 2015). A data-generation sketch follows the table.
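
The report notes that every least-squares projection onto a fixed support is solved with SciPy's conjugate gradient. Below is a minimal sketch of that step, assuming a normal-equations formulation; the function name `project_on_support` and the convergence handling are illustrative, not taken from the project's code.

```python
import numpy as np
from scipy.sparse.linalg import cg

def project_on_support(A, y, S):
    """Least-squares fit of y by the columns of A indexed by S,
    solved with SciPy's conjugate gradient on the normal equations
    (an assumed formulation; the paper only states that SciPy's CG
    is used for this projection)."""
    AS = A[:, S]
    xS, info = cg(AS.T @ AS, AS.T @ y)  # info == 0 signals convergence
    x = np.zeros(A.shape[1])
    x[S] = xS
    return x
```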
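
For readers without the paper at hand, here is a hedged sketch of Algorithm 1 (SEA), assuming the straight-through structure the paper describes: a dense exploratory variable drives support selection, while the gradient is evaluated at the sparse least-squares iterate. The initialization and the best-iterate bookkeeping are assumptions, not verbatim from Algorithm 1.

```python
import numpy as np

def sea(A, y, k, eta, n_iter=1000):
    """Hedged sketch of the Support Exploration Algorithm (SEA).
    The paper runs 256k iterations; a smaller default is used here."""
    n = A.shape[1]
    x_tilde = A.T @ y                    # assumed initialization
    best_x, best_res = np.zeros(n), np.inf
    for _ in range(n_iter):
        # candidate support: indices of the k largest |x_tilde|
        S = np.argpartition(np.abs(x_tilde), -k)[-k:]
        x = project_on_support(A, y, S)  # CG projection sketched above
        r = A @ x - y
        res = float(np.linalg.norm(r))
        if res < best_res:               # keep the best iterate seen
            best_x, best_res = x.copy(), res
        x_tilde -= eta * (A.T @ r)       # straight-through gradient step
    return best_x
```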
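
The synthetic setup quoted in the Experiment Setup row can be reproduced up to the noise model (the construction of (Blanchard & Tanner, 2015) is not restated in this section, so it is omitted below). The dimensions m, n, k are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 200, 10                 # illustrative dimensions

# Gaussian sensing matrix with unit-norm columns
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0, keepdims=True)

# k-sparse ground truth: support drawn uniformly without replacement,
# non-zero entries uniform on [-2, -1] ∪ [1, 2] as in (Elad, 2010)
x_star = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_star[support] = rng.choice([-1.0, 1.0], size=k) * rng.uniform(1.0, 2.0, size=k)

# step size eta = 1.8 / L, with L the spectral radius of A as reported
L = np.linalg.norm(A, 2)
eta = 1.8 / L

y = A @ x_star   # noiseless measurements; the noise e of
                 # (Blanchard & Tanner, 2015) is omitted here
```

With these pieces in place, a run of the sketch above would be `x_hat = sea(A, y, k, eta)`.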
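
Finally, the real-dataset preprocessing described in the Open Datasets row (bias column plus column normalization) amounts to the following; the function name is illustrative.

```python
import numpy as np

def preprocess_dataset(A):
    """Append an all-ones column so the linear model carries a bias,
    then normalize every column to unit Euclidean norm, as described
    for the datasets of (Axiotis & Sviridenko, 2020)."""
    A = np.hstack([A, np.ones((A.shape[0], 1))])
    return A / np.linalg.norm(A, axis=0, keepdims=True)
```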