Learning a Compressed Sensing Measurement Matrix via Gradient Unrolling
Authors: Shanshan Wu, Alex Dimakis, Sujay Sanghavi, Felix Yu, Daniel Holtmann-Rice, Dmitry Storcheus, Afshin Rostamizadeh, Sanjiv Kumar
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the empirical performance of 10 algorithms over 6 sparse datasets (3 synthetic and 3 real). Our experiments show that there is indeed additional structure beyond sparsity in the real datasets; our method is able to discover it and exploit it to create excellent reconstructions with fewer measurements (by a factor of 1.1-3x) compared to the previous state-of-the-art methods. |
| Researcher Affiliation | Collaboration | ¹Department of Electrical and Computer Engineering, University of Texas at Austin, USA; ²Google Research, New York, USA. |
| Pseudocode | No | The paper describes algorithmic steps and equations, but does not present them in a clearly labeled "Pseudocode" or "Algorithm" block. |
| Open Source Code | Yes | We implement ℓ1-AE in Tensorflow. Our code is available online: https://github.com/wushanshan/L1AE. |
| Open Datasets | Yes | Our second dataset Wiki10-31K is a multi-label dataset downloaded from this repository (Bhatia et al., 2017). We only use the label vectors to train our autoencoder. Our third dataset is RCV1 (Lewis et al., 2004), a popular text dataset. |
| Dataset Splits | Yes | Table 1. Summary of the datasets. The validation set is used for parameter tuning and early stopping. ... Train / Valid / Test Size |
| Hardware Specification | Yes | A single NVIDIA Quadro P5000 GPU is used in the experiments. |
| Software Dependencies | No | The paper mentions "Tensorflow" and "Gurobi (a commercial optimization solver)" but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We use stochastic gradient descent to train the autoencoder. Before training, every sample is normalized to have unit ℓ2 norm. The parameters are initialized as follows: A ∈ ℝ^{m×d} is a random Gaussian matrix with standard deviation 1/√d; β is initialized as 1.0. Other hyperparameters are given in Appendix B. A single NVIDIA Quadro P5000 GPU is used in the experiments. We set the decoder depth T = 10 for most datasets. |
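The initialization and preprocessing described in the Experiment Setup row can be sketched as follows. This is a minimal NumPy sketch for illustration; the function names are ours, not taken from the released L1AE code, which implements this in TensorFlow.

```python
import numpy as np

def init_l1ae_params(m, d, seed=0):
    """Initialize the learned measurement matrix and step size as described:
    A in R^{m x d} with i.i.d. Gaussian entries of std 1/sqrt(d); beta = 1.0.
    (Hypothetical helper; mirrors the setup description, not the actual repo.)"""
    rng = np.random.default_rng(seed)
    A = rng.normal(loc=0.0, scale=1.0 / np.sqrt(d), size=(m, d))
    beta = 1.0
    return A, beta

def normalize_rows(X, eps=1e-12):
    """Normalize every sample (row of X) to unit l2 norm before training."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)
```

For example, `init_l1ae_params(m=20, d=100)` yields a 20×100 matrix whose entries have standard deviation 0.1, matching the 1/√d scaling; `normalize_rows` is applied to the data matrix once before SGD begins.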