End-to-end reconstruction meets data-driven regularization for inverse problems
Authors: Subhadip Mukherjee, Marcello Carioni, Ozan Öktem, Carola-Bibiane Schönlieb
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our theoretical results on the learned unrolled operator and the regularizer are corroborated by strong experimental evidence for the CT inverse problem, and the illustrative inpainting and denoising examples (see Sec. B in the supplementary document). |
| Researcher Affiliation | Academia | Subhadip Mukherjee¹, Marcello Carioni¹, Ozan Öktem², and Carola-Bibiane Schönlieb¹. ¹Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK; ²Department of Mathematics, KTH Royal Institute of Technology, Sweden |
| Pseudocode | Yes | Algorithm 1 Learning unrolled adversarial regularization (UAR). |
| Open Source Code | Yes | Code at https://github.com/Subhadip-1/unrolling_meets_data_driven_regularization. |
| Open Datasets | Yes | The abdominal CT scans for 10 patients, made publicly available by the Mayo-Clinic for the low-dose CT grand challenge [18], were used in our numerical experiments. |
| Dataset Splits | No | Specifically, 2250 2D slices of size 512 × 512 corresponding to 9 patients were used to train the models, while 128 slices from the remaining one patient were used for evaluation. No explicit validation split information (e.g., percentages or exact counts for a validation set) is provided. |
| Hardware Specification | Yes | Training UAR took approximately three hours per epoch on an NVIDIA Quadro RTX 6000 GPU (with 24 GB of memory). |
| Software Dependencies | No | The paper mentions a 'PyTorch-based implementation' and the use of 'ODL [3]', but it does not specify version numbers for PyTorch, ODL, or other key software dependencies. |
| Experiment Setup | Yes | Algorithm 1: Input: training dataset {x_i}_{i=1}^N ~ π_x and {y_j}_{j=1}^N ~ π_{y_δ}; initial reconstruction-network parameters φ and regularizer parameters θ; batch size n_b = 1; penalty λ = 0.1; gradient penalty λ_gp = 10.0; Adam optimizer parameters (β1, β2) = (0.50, 0.99). ... with step size η = 10⁻⁴. ... η = 2 × 10⁻⁵. The unrolled network G_φ has 20 layers, with 5 × 5 filters in both primal and dual spaces... |
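The hyperparameters reported in Algorithm 1 (batch size 1, penalty λ = 0.1, gradient penalty λ_gp = 10.0, Adam with (β1, β2) = (0.50, 0.99), step sizes 10⁻⁴ and 2 × 10⁻⁵) can be wired into a PyTorch training step. The sketch below is illustrative only: the critic here is a stand-in for the paper's learned regularizer, and the gradient-penalty form is the standard WGAN-GP interpolation penalty, assumed rather than taken from the paper's code.

```python
import torch

# Hyperparameters as reported in the paper's Algorithm 1.
BATCH_SIZE = 1
LAMBDA_GP = 10.0          # gradient-penalty weight
BETAS = (0.50, 0.99)      # Adam (beta1, beta2)
LR_REGULARIZER = 1e-4     # step size for the regularizer (critic)
LR_RECON_NET = 2e-5       # step size for the unrolled reconstruction network

def gradient_penalty(critic, real, fake):
    """Standard WGAN-style gradient penalty on random interpolates
    between real and generated samples (assumed form)."""
    eps = torch.rand(real.size(0), 1, 1, 1)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    out = critic(interp).sum()
    grads, = torch.autograd.grad(out, interp, create_graph=True)
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

# Illustrative critic standing in for the paper's regularizer network.
critic = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(critic.parameters(), lr=LR_REGULARIZER, betas=BETAS)

# One adversarial update on dummy 4x4 "images" (placeholders for CT slices).
real = torch.randn(BATCH_SIZE, 1, 4, 4)
fake = torch.randn(BATCH_SIZE, 1, 4, 4)
loss = (critic(fake).mean() - critic(real).mean()
        + LAMBDA_GP * gradient_penalty(critic, real, fake))
opt.zero_grad()
loss.backward()
opt.step()
```

The reconstruction network would be updated analogously with its own Adam optimizer at the smaller step size 2 × 10⁻⁵; the alternating schedule between the two updates follows Algorithm 1 in the paper.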