Provable Learning-based Algorithm For Sparse Recovery

Authors: Xinshi Chen, Haoran Sun, Le Song

ICLR 2022 | Conference PDF | Archive PDF

Reproducibility assessment (each variable below is listed with its result and the supporting LLM response):

Research Type: Experimental
LLM Response: "7 EXPERIMENTS"

Researcher Affiliation: Collaboration
LLM Response: "Xinshi Chen & Haoran Sun, School of Mathematics, Georgia Institute of Technology, Atlanta, USA, {xinshi.chen,haoransun}@gatech.edu; Le Song, Machine Learning Department, MBZUAI & BioMap, UAE & China, songle@biomap.com"

Pseudocode: Yes
LLM Response: "Algorithm 1: PLISAθ architecture; Algorithm 2: Layers in each block; Algorithm 3: The Approximate Path Following Method; Algorithm 4: The Proximal Gradient Method; Algorithm 5: The Line Search Method"
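The listed algorithms appear in the paper as pseudocode only. For orientation, here is a minimal, generic proximal gradient (ISTA) sketch for the sparse linear recovery objective min_x 0.5*||Ax - b||^2 + lam*||x||_1; it is a standard textbook routine standing in for what Algorithm 4 names, not PLISA itself, and every identifier in it is illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, n_iters=200):
    # Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with fixed step size 1/L,
    # where L = ||A||_2^2 is the Lipschitz constant of the smooth part.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)  # gradient of the smooth term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```
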
Open Source Code: No
LLM Response: "The paper does not provide any explicit statement or link to open-source code for the methodology (PLISA) described in the paper."

Open Datasets: Yes
LLM Response: "In the sparse precision matrix estimation problem, we follow the setting in Guillot et al. (2012). ... (1) Gene: a single-cell gene expression dataset that contains expression levels of 45 transcription factors measured at different time-points. We follow Ollier & Viallon (2017) to pick the transcription factor, EGR2, as the response variable... (2) Parkinsons: a disease dataset that contains symptom scores of Parkinson's for different patients. ... (Tsanas et al., 2009)... (3) School: an examination score dataset of students from 139 secondary schools in London. ... from the Malsar package (Zhou et al., 2011)."

Dataset Splits: Yes
LLM Response: "In all the experiments, 2000 such problems are used for training, 200 such problems are used for validation, and 100 such problems are used for test."
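Read literally, the split could be materialized as in the sketch below, assuming `problems` holds 2300 independently generated problem instances (all names here are illustrative, not from the paper):

```python
# Assumed: `problems` is a list of 2300 generated problem instances.
train = problems[:2000]      # 2000 problems for training
valid = problems[2000:2200]  # 200 problems for validation
test  = problems[2200:]      # 100 problems for test
```
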
Hardware Specification: Yes
LLM Response: "The evaluation is performed on a server with CPU: Intel(R) Xeon(R) Silver 4116 CPU @ 2.10GHz, GPU: Nvidia GTX 2080TI, Memory 264G, in single thread."

Software Dependencies: No
LLM Response: "We use optimizer Adam (Kingma & Ba, 2014). For sparse linear recovery problem... We use the implementation in the sklearn package (Pedregosa et al., 2011)."
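The excerpt does not name the exact sklearn estimator used for sparse linear recovery; assuming it is sklearn's Lasso, a minimal usage sketch (with illustrative dimensions and an illustrative regularization strength) would be:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical sparse linear recovery instance; dimensions are illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((250, 500))
x_true = np.zeros(500)
x_true[rng.choice(500, size=25, replace=False)] = rng.standard_normal(25)
b = A @ x_true

# sklearn's Lasso minimizes (1/(2*n_samples)) * ||Ax - b||^2 + alpha * ||x||_1.
model = Lasso(alpha=0.01, max_iter=10000)
model.fit(A, b)
x_hat = model.coef_
```
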
Experiment Setup: Yes
LLM Response: "We use optimizer Adam (Kingma & Ba, 2014). For the sparse linear recovery problem, we use batch size 10, train 500 epochs with learning rate 1e-4, and select the model based on the l2 loss on validation data. For the sparse precision matrix estimation problem, we use batch size 40, train 200 epochs with learning rate 1e-3, and select the model based on the Frobenius loss on validation data."

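A sketch of this training setup for the sparse linear recovery case, assuming a PyTorch model and standard DataLoaders over (input, target) pairs; none of the identifiers below come from the paper:

```python
import copy
import torch

def train(model, train_loader, valid_loader, epochs=500, lr=1e-4):
    # Adam optimizer as stated in the paper; lr matches the linear recovery setting.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_loss, best_state = float("inf"), None
    for _ in range(epochs):
        model.train()
        for inputs, targets in train_loader:  # batch size 10 per the paper
            opt.zero_grad()
            loss = torch.mean((model(inputs) - targets) ** 2)  # l2 loss
            loss.backward()
            opt.step()
        # Model selection: keep the weights with the lowest validation l2 loss.
        model.eval()
        with torch.no_grad():
            val = sum(torch.mean((model(x) - y) ** 2).item()
                      for x, y in valid_loader) / len(valid_loader)
        if val < best_loss:
            best_loss, best_state = val, copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
    return model
```

The precision matrix setting would differ only in the stated hyperparameters (batch size 40, 200 epochs, learning rate 1e-3) and a Frobenius-norm selection loss.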