Learned D-AMP: Principled Neural Network based Compressive Image Recovery
Authors: Chris Metzler, Ali Mousavi, Richard Baraniuk
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS. |
| Researcher Affiliation | Academia | Christopher A. Metzler Rice University chris.metzler@rice.edu Ali Mousavi Rice University ali.mousavi@rice.edu Richard G. Baraniuk Rice University richb@rice.edu |
| Pseudocode | No | The paper provides algorithmic steps as numbered equations (e.g., D-IT Algorithm (3), D-AMP Algorithm (4), LDAMP Neural Network (5)), but these are not presented within a formally labeled "Pseudocode" or "Algorithm" block or figure. (A sketch of the D-AMP iterations appears after the table.) |
| Open Source Code | Yes | Public implementations of both versions of the algorithm are available at https://github.com/ricedsp/D-AMP_Toolbox. |
| Open Datasets | Yes | Training images were pulled from Berkeley's BSD-500 dataset [46]. |
| Dataset Splits | Yes | From this dataset, we used 400 images for training, 50 for validation, and 50 for testing. For the results presented in Section 3, the training images were cropped, rescaled, flipped, and rotated to form a set of 204,800 overlapping 40×40 patches. The validation images were cropped to form 1,000 non-overlapping 40×40 patches. We used 256 non-overlapping 40×40 patches for testing. (An illustrative patch-extraction sketch appears after the table.) |
| Hardware Specification | Yes | Training generally took between 3 and 5 hours per denoiser on an Nvidia Pascal Titan X. |
| Software Dependencies | No | We implemented LDAMP and LDIT, using the DnCNN denoiser [39], in both TensorFlow and MatConvNet [47], which is a toolbox for Matlab. The paper names software like TensorFlow and MatConvNet but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | We trained all the networks using the Adam optimizer [48] with a training rate of 0.001, which we dropped to 0.0001 and then 0.00001 when the validation error stopped improving. We used mini-batches of 32 to 256 patches, depending on network size and memory usage. (An illustrative training-setup sketch appears after the table.) |
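
The pseudocode row above refers to the D-AMP algorithm (equation (4) in the paper). For orientation, a sketch of the standard D-AMP iteration is given below in the notation of the original D-AMP work: $D_{\hat{\sigma}^t}$ is the plug-in denoiser, $A$ is the $m \times n$ measurement matrix, and $\operatorname{div} D$ is the denoiser's divergence (estimated by Monte Carlo in practice). This is a summary for context, not a verbatim copy of the paper's equations.

```latex
\begin{aligned}
z^{t}            &= y - A x^{t} + \frac{z^{t-1}}{m}\,
                    \operatorname{div} D_{\hat{\sigma}^{t-1}}\!\left(x^{t-1} + A^{H} z^{t-1}\right),\\
\hat{\sigma}^{t} &= \frac{\lVert z^{t} \rVert_{2}}{\sqrt{m}},\\
x^{t+1}          &= D_{\hat{\sigma}^{t}}\!\left(x^{t} + A^{H} z^{t}\right).
\end{aligned}
```

LDAMP (equation (5) in the paper) unrolls these iterations into a network and replaces each denoiser $D_{\hat{\sigma}^t}$ with a learned DnCNN denoiser.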
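The dataset-splits row mentions cropping, rescaling, flipping, and rotating the 400 BSD-500 training images into 204,800 overlapping 40×40 patches. The Python sketch below shows one way such overlapping patches and flip/rotation augmentations can be generated; the stride value and function names are illustrative assumptions, not the authors' released preprocessing code.

```python
import numpy as np

def extract_patches(image, patch_size=40, stride=20):
    """Extract overlapping patch_size x patch_size patches from a 2-D image.

    The stride of 20 is an assumed value that yields overlapping 40x40 patches.
    """
    h, w = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

def augment(image):
    """Yield the 8 flipped/rotated (dihedral) variants of an image."""
    for k in range(4):
        rotated = np.rot90(image, k)
        yield rotated
        yield np.fliplr(rotated)
```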
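The experiment-setup row quotes an Adam schedule (0.001, dropped to 0.0001 and then 0.00001 when validation error stops improving) with mini-batches of 32 to 256 patches. The sketch below mirrors that schedule using the modern TensorFlow Keras API, which postdates the paper; the small DnCNN-style model, the patience value, and the batch size are placeholders, not the authors' exact configuration.

```python
import tensorflow as tf

PATCH = 40  # 40x40 grayscale training patches, as described in the table

def small_dncnn_like(depth=8, filters=64):
    """A small DnCNN-style residual denoiser (placeholder, not the authors' exact network)."""
    inputs = tf.keras.Input(shape=(PATCH, PATCH, 1))
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    for _ in range(depth - 2):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.ReLU()(x)
    residual = tf.keras.layers.Conv2D(1, 3, padding="same")(x)
    # DnCNN-style networks predict the noise; subtract it to obtain the denoised patch.
    outputs = tf.keras.layers.Subtract()([inputs, residual])
    return tf.keras.Model(inputs, outputs)

model = small_dncnn_like()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")

# Drop the rate 1e-3 -> 1e-4 -> 1e-5 when validation loss plateaus,
# mirroring the schedule quoted in the table (patience is an assumed value).
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.1, patience=5, min_lr=1e-5)

# noisy_train / clean_train and the validation arrays are hypothetical patch tensors:
# model.fit(noisy_train, clean_train, batch_size=128,
#           validation_data=(noisy_val, clean_val),
#           epochs=50, callbacks=[reduce_lr])
```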