Neural Proximal Gradient Descent for Compressive Imaging
Authors: Morteza Mardani, Qingyun Sun, David Donoho, Vardan Papyan, Hatef Monajemi, Shreyas Vasanawala, John Pauly
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are carried out under different settings: (a) reconstructing abdominal MRI of pediatric patients from highly undersampled Fourier-space data and (b) superresolving natural face images. |
| Researcher Affiliation | Academia | Depts. of Electrical Eng., Radiology, Statistics, and Mathematics; Stanford University |
| Pseudocode | No | The paper describes the iterative procedure in mathematical equations and prose but does not provide a formal pseudocode block or algorithm box (a minimal sketch of the iteration appears after this table). |
| Open Source Code | Yes | The source code for the TensorFlow implementation is publicly available on the GitHub page [35]. |
| Open Datasets | Yes | Adopting the CelebFaces Attributes Dataset (CelebA) [40], for training and test we use 10K and 1,280 images, respectively. |
| Dataset Splits | No | The paper mentions 'train dataset' and 'test dataset' with specific counts, but does not explicitly describe a validation split. |
| Hardware Specification | Yes | Training was performed with the TensorFlow interface on an NVIDIA Titan X Pascal GPU with 12GB RAM. |
| Software Dependencies | No | The paper mentions the 'TensorFlow interface' but does not specify its version number or any other software dependencies with versions. |
| Experiment Setup | Yes | We used the Adam SGD optimizer with the momentum parameter 0.9, mini-batch size 2, and initial learning rate 10⁻⁵ that is halved every 10K iterations. For training the RNN, we use the ℓ2 cost in (P2) with β = 0.75. (See the optimizer sketch after this table.) |
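
Since the paper gives the iteration only in equations and prose, here is a minimal Python sketch of the recursion it describes: a gradient step on the least-squares data-fidelity term followed by a learned proximal map. This assumes a linear measurement matrix; `Phi`, `prox_net`, `alpha`, and `n_iters` are placeholder names for illustration, not identifiers from the authors' code.

```python
import numpy as np

def neural_pgd(y, Phi, prox_net, alpha=1.0, n_iters=10):
    """Unrolled proximal gradient descent with a learned proximal map.

    y        : measurements (e.g., undersampled Fourier-space samples)
    Phi      : linear measurement matrix, shape (m, n), possibly complex
    prox_net : callable standing in for the trained proximal network
    """
    x = Phi.conj().T @ y                     # adjoint initialization
    for _ in range(n_iters):
        grad = Phi.conj().T @ (Phi @ x - y)  # gradient of 0.5 * ||y - Phi x||^2
        x = prox_net(x - alpha * grad)       # learned proximal (denoising) step
    return x
```

Because the same network weights are shared across iterations, training the unrolled scheme amounts to training a recurrent network, consistent with the RNN training the paper reports.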
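The reported training configuration (Adam with momentum 0.9, mini-batch size 2, learning rate 10⁻⁵ halved every 10K iterations) can be sketched as below. This uses the modern `tf.keras` API rather than the TF1-era interface the authors likely used, and the β-weighted ℓ2/ℓ1 mix in `p2_cost` is only a hypothetical stand-in for the (P2) objective, whose exact form is defined in the paper.

```python
import tensorflow as tf

# Initial learning rate 1e-5, halved every 10K iterations (staircase decay).
lr = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-5, decay_steps=10_000,
    decay_rate=0.5, staircase=True)

# Adam with momentum (beta_1) 0.9; mini-batches of 2 images.
optimizer = tf.keras.optimizers.Adam(learning_rate=lr, beta_1=0.9)
BATCH_SIZE = 2

def p2_cost(x_true, x_pred, beta=0.75):
    # Hypothetical stand-in for the paper's (P2) cost: a beta-weighted mix of
    # l2 and l1 reconstruction errors, with the reported beta = 0.75.
    l2 = tf.reduce_mean(tf.square(tf.abs(x_true - x_pred)))
    l1 = tf.reduce_mean(tf.abs(x_true - x_pred))
    return beta * l2 + (1.0 - beta) * l1
```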