Prior Image-Constrained Reconstruction using Style-Based Generative Models
Authors: Varun A Kelkar, Mark Anastasio
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments demonstrating the superior performance of our approach as compared to related methods are presented. The numerical studies were split into three parts: (1) inverse-crime study, where the object was directly sampled from the StyleGAN2 and measurements were simulated using a Gaussian forward model, (2) face image study, where real face images were used to simulate noisy measurements using a Gaussian forward model, and (3) MR image study, where real brain MR images were used to simulate stylized undersampled MRI measurements with noise. |
| Researcher Affiliation | Academia | 1University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA. |
| Pseudocode | Yes | Algorithm 1 Projected Adam algorithm for minimizing the objective in Eq. (9). |
| Open Source Code | Yes | The Tensorflow/python implementation of the reconstruction methods can be found at https://github.com/comp-imaging-sci/pic-recon |
| Open Datasets | Yes | For the inverse-crime study, StyleGAN2 was trained on a composite brain MR image dataset consisting of a total of 200676 T1- and T2-weighted images of size 256×256 from the fastMRI initiative database (Zbontar et al., 2018) and 866 images from the brain tumor progression dataset (Schmainda & Prah, 2018). For the face image study, a StyleGAN2 with an output image size of 128×128×3 was trained on images from the Flickr-Faces-HQ (FFHQ) dataset (Karras et al., 2019). For the MR image study, StyleGAN2 was trained on a composite brain MR image dataset consisting of 164741 T1- and T2-weighted images from the fastMRI database, 686 images from the brain tumor progression dataset, 2206 T1- and T2-weighted images from the TCIA-GBM dataset (Scarpace et al., 2016), and 36978 T2-weighted images from the OASIS-3 dataset (LaMontagne et al., 2019). |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, and test sets. It describes different datasets for different studies, and for each, it mentions how reconstruction performance was evaluated on a dataset, which serves as the test data. |
| Hardware Specification | Yes | The networks were trained using Tensorflow 1.14/Python (Abadi et al., 2015) on an Intel Xeon E5-2620v4 CPU @ 2.1 GHz and four Nvidia TITAN X graphics processing units (GPUs). |
| Software Dependencies | Yes | The networks were trained using Tensorflow 1.14/Python (Abadi et al., 2015) |
| Experiment Setup | Yes | Input: Measurements g, prior image latent w_(PI), regularization parameters p_1, p_2, λ, maximum iterations n_iter. L(w; λ): objective function from Eq. (9). Initialize Adam optimizer parameters (α, β_1, β_2); default parameters were used. The regularization parameters for all the methods were tuned using either a line search or a grid search, depending upon the number of regularization parameters, and the setting giving the lowest ensemble root mean-squared error (RMSE) was chosen. |
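The pseudocode row above refers to the paper's Algorithm 1, a Projected Adam optimizer: a standard Adam update on the latent vector followed by a projection back onto the feasible set after every step. The paper's code is TensorFlow 1.14; below is a minimal NumPy sketch of the generic pattern only. The `make_ball_projection` constraint (an ℓ2-ball around the prior-image latent) is a hypothetical illustration, not necessarily the constraint set used in Eq. (9) of the paper.

```python
import numpy as np

def projected_adam(grad_fn, w0, project, n_iter=1000,
                   alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Generic projected Adam: each Adam step is followed by a
    projection of the iterate back onto the constraint set."""
    w = w0.copy()
    m = np.zeros_like(w)  # first-moment (mean) estimate
    v = np.zeros_like(w)  # second-moment (uncentered variance) estimate
    for t in range(1, n_iter + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)  # bias-corrected moments
        v_hat = v / (1 - beta2**t)
        w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
        w = project(w)              # enforce the constraint
    return w

# Hypothetical constraint for illustration: Euclidean projection onto
# an l2-ball of radius r centered at a prior latent w_pi.
def make_ball_projection(w_pi, r):
    def project(w):
        d = w - w_pi
        n = np.linalg.norm(d)
        return w if n <= r else w_pi + d * (r / n)
    return project
```

For example, minimizing ||w - target||² with `target` outside the ball drives the iterate to the boundary point of the ball nearest `target`.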