Differentiable Gaussianization Layers for Inverse Problems Regularized by Deep Generative Models

Authors: Dongzhuo Li

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our technique on three inversion tasks: compressive-sensing MRI, image deblurring, and eikonal tomography (a nonlinear PDE-constrained inverse problem) using two representative deep generative models: StyleGAN2 and Glow. Our approach achieves state-of-the-art performance in terms of accuracy and consistency. And, from Section 4 (Experiments): We consider three representative inversion problems for testing: compressive sensing MRI, image deblurring, and eikonal traveltime tomography.
Researcher Affiliation | Industry | Dongzhuo Li, ExxonMobil Technology & Engineering Company, dongzhuo.li@exxonmobil.com
Pseudocode | Yes | Algorithm 1: ICA Layer; Algorithm 2: Power Transformation Layer; and Algorithm 3: Lambert W × F_X Layer with the Iterative Generalized Method of Moments (IGMM). (An illustrative power-transformation sketch appears after the table.)
Open Source Code | Yes | The implementation is available here.
Open Datasets | Yes | For MRI and eikonal tomography, we used synthetic brain images as inversion targets and used the pre-trained StyleGAN2 weights from Kelkar & Anastasio (2021) (trained on data from the databases of fastMRI (Zbontar et al., 2018; Knoll et al., 2020), TCIA-GBM (Scarpace et al., 2016), and OASIS-3 (LaMontagne et al., 2019)) for regularization. We used the test split of the CelebA-HQ dataset (Karras et al., 2018) for deblurring, and the DGM is a Glow network trained on the training split.
Dataset Splits | Yes | We split the 30000 images from CelebA-HQ into the subsets of training (24183 images), validation (2993 images), and testing (2824 images) following the original splits from CelebA (Liu et al., 2015). (A sketch of reproducing such a split appears after the table.)
Hardware Specification | Yes | All training was conducted using 8 × 32 GB Nvidia V100 GPUs with a batch size of 64.
Software Dependencies | No | The paper mentions software like SciPy and implies PyTorch (via `torch.roll`), but does not provide specific version numbers for these or other key software dependencies.
Experiment Setup | Yes | We used the LBFGS (Nocedal & Wright, 2006) optimizer in all experiments except TV, noise regularization, and CSGM-w, which use FISTA (Beck & Teboulle, 2009) or ADAM (Kingma & Ba, 2015). The temperature was set to 1.0 for StyleGAN2 and 0.7 for Glow. For the hyper-parameters of the Glow networks, we used 4 multi-scale levels and 32 flow-steps, and we only used additive coupling layers. All training was conducted using 8 × 32 GB Nvidia V100 GPUs with a batch size of 64. We used the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1e-4, as well as β1 = 0.9 and β2 = 0.99. (An optimizer-setup sketch appears after the table.)
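
The Power Transformation Layer referenced in the Pseudocode row can be pictured with a short sketch. The following is a minimal illustration, not the paper's Algorithm 2: it estimates a Yeo-Johnson lambda with SciPy on detached latent values, then re-applies the transform with differentiable PyTorch ops so the Gaussianized output stays connected to the autograd graph. The function name and the final standardization step are assumptions.

```python
import torch
from scipy import stats

def power_transform_layer(z, eps=1e-6):
    # Illustrative only: estimate a Yeo-Johnson lambda on detached values with
    # SciPy, then re-apply the transform with differentiable torch ops so
    # gradients still flow back to z.
    lmbda = float(stats.yeojohnson_normmax(z.detach().cpu().numpy().ravel()))

    # Yeo-Johnson transform (Yeo & Johnson, 2000), applied elementwise.
    pos = z >= 0
    if abs(lmbda) > eps:
        y_pos = ((z.clamp(min=0) + 1.0) ** lmbda - 1.0) / lmbda
    else:
        y_pos = torch.log1p(z.clamp(min=0))
    if abs(lmbda - 2.0) > eps:
        y_neg = -((1.0 - z.clamp(max=0)) ** (2.0 - lmbda) - 1.0) / (2.0 - lmbda)
    else:
        y_neg = -torch.log1p(-z.clamp(max=0))
    y = torch.where(pos, y_pos, y_neg)

    # Standardize so the output is approximately N(0, 1).
    return (y - y.mean()) / (y.std() + eps)
```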
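
The Dataset Splits row quotes a CelebA-HQ split that follows the original CelebA partition. Below is a rough sketch of how such a split could be reproduced; the metadata file names, their column layouts, and the indexing convention between CelebA-HQ and CelebA are assumptions and may need adjusting, and none of this is taken from the paper's code.

```python
def split_celebahq(hq_to_celeba_path, partition_path):
    # Mapping from each CelebA-HQ image to its source CelebA image
    # (e.g., CelebA-HQ's image_list.txt; column layout assumed).
    hq_to_celeba = {}
    with open(hq_to_celeba_path) as f:
        next(f)  # assumed header line
        for line in f:
            fields = line.split()
            hq_to_celeba[int(fields[0])] = fields[2]  # HQ index -> CelebA filename

    # Original CelebA partition file (list_eval_partition.txt):
    # 0 = train, 1 = validation, 2 = test.
    partition = {}
    with open(partition_path) as f:
        for line in f:
            name, part = line.split()
            partition[name] = int(part)

    names = ["train", "val", "test"]
    splits = {name: [] for name in names}
    for hq_idx, celeba_name in hq_to_celeba.items():
        splits[names[partition[celeba_name]]].append(hq_idx)
    return splits  # expected sizes roughly 24183 / 2993 / 2824
```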
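
The Experiment Setup row mixes L-BFGS for most inversion runs with Adam for network training. The sketch below only illustrates those quoted optimizer settings in PyTorch; the function names, loss closure, and iteration count are placeholders, not the paper's code.

```python
import torch

def make_glow_training_optimizer(params):
    # Adam with lr = 1e-4, beta1 = 0.9, beta2 = 0.99, as quoted for training.
    return torch.optim.Adam(params, lr=1e-4, betas=(0.9, 0.99))

def run_lbfgs_inversion(latent, loss_fn, n_iter=100):
    # L-BFGS is quoted as the optimizer for most inversion experiments.
    # `latent` is assumed to be a leaf tensor with requires_grad=True.
    opt = torch.optim.LBFGS([latent], max_iter=n_iter, line_search_fn="strong_wolfe")

    def closure():
        opt.zero_grad()
        loss = loss_fn(latent)
        loss.backward()
        return loss

    opt.step(closure)
    return latent
```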