Convex Regularization behind Neural Reconstruction
Authors: Arda Sahiner, Morteza Mardani, Batu Ozturkler, Mert Pilanci, John M. Pauly
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A range of experiments with MNIST and fastMRI datasets confirm the efficacy of the dual network optimization problem. |
| Researcher Affiliation | Academia | Department of Electrical Engineering, Stanford University {sahiner, morteza, ozt, pilanci, pauly}@stanford.edu |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions the use of the PyTorch deep learning library but does not provide any explicit statement or link for the open-source code of their own methodology. |
| Open Datasets | Yes | We use a subset of the MNIST handwritten digits (LeCun et al., 1998). ... we use the fastMRI dataset (Zbontar et al., 2018), a benchmark dataset for evaluating deep-learning based MRI reconstruction methods. |
| Dataset Splits | No | The paper specifies training and test sets but does not explicitly mention a validation set or specific split percentages for training, validation, and test. |
| Hardware Specification | Yes | We train both the primal and the dual network in a distributed fashion on a NVIDIA GeForce GTX 1080 Ti GPU and NVIDIA Titan X GPU. |
| Software Dependencies | No | The paper mentions using the PyTorch deep learning library and the SigPy python package but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | All networks were trained with an Adam optimizer, with β1 = 0.9, β2 = 0.999, and ϵ = 10^-8. ... For the primal network, we use 512 filters, whereas for the dual network, we randomly sample 8,000 sign patterns... For the primal network, we train with a learning rate of µ = 10^-1, whereas for the dual network we use a learning rate of µ = 10^-3. We use a batch size of 25 for all cases. For the weight-decay parameter we use a value of β = 10^-5. |
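The reported setup (Adam with β1 = 0.9, β2 = 0.999, ϵ = 10^-8, weight decay β = 10^-5, learning rates 10^-1 and 10^-3) can be sketched as a single Adam update in plain Python. This is a minimal illustration of the stated hyperparameters, not the authors' code; the function name, the scalar-parameter setup, and the choice to fold weight decay into the gradient (L2-style) are assumptions.

```python
import math

def adam_step(param, grad, m, v, t, lr, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=1e-5):
    """One Adam update with the paper's reported hyperparameters.

    How the weight-decay parameter beta = 1e-5 enters the update is an
    assumption here: it is added to the gradient in the usual L2 fashion.
    """
    grad = grad + weight_decay * param
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Illustrative loop with a fixed gradient; the paper uses lr = 1e-1 for
# the primal network and lr = 1e-3 for the dual network.
p, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    p, m, v = adam_step(p, 0.5, m, v, t, lr=1e-3)
```

Because the bias-corrected step normalizes by the gradient magnitude, each update here moves the parameter by roughly the learning rate.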