Online Deep Equilibrium Learning for Regularization by Denoising

Authors: Jiaming Liu, Xiaojian Xu, Weijie Gan, Shirin Shoushtari, Ulugbek Kamilov

NeurIPS 2022

Reproducibility Assessment (each entry lists the variable, its result, and the LLM response)

Research Type: Experimental
LLM Response: Our numerical results suggest the potential improvements in training/testing complexity due to ODER on three distinct imaging applications.

Researcher Affiliation: Academia
LLM Response: Jiaming Liu, Washington University in St. Louis (jiaming.liu@wustl.edu); Xiaojian Xu, Washington University in St. Louis (xiaojianxu@wustl.edu); Weijie Gan, Washington University in St. Louis (weijie.gan@wustl.edu); Shirin Shoushtari, Washington University in St. Louis (s.shirin@wustl.edu); Ulugbek S. Kamilov, Washington University in St. Louis (kamilov@wustl.edu)

Pseudocode: No
LLM Response: The paper describes the algorithms in text, for example, the forward and backward passes, but does not provide a formal pseudocode block or algorithm listing.

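Since the paper describes the forward and backward passes only in text, the following is a minimal sketch of a generic deep-equilibrium forward/backward pass consistent with that description. The fixed-point map `f`, the vector-Jacobian product `vjp`, and the toy affine example are hypothetical stand-ins, not the authors' code; only the relative-residual stopping rule with tolerance 1e-3 comes from the paper.

```python
import torch

def deq_forward(f, x0, tol=1e-3, max_iter=100):
    # Forward pass: iterate x <- f(x) to a fixed point, stopping when the
    # relative residual falls below `tol` (the paper reports 1e-3).
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if torch.linalg.norm(x_new - x) / (torch.linalg.norm(x) + 1e-12) < tol:
            return x_new
        x = x_new
    return x

def deq_backward(vjp, grad_out, tol=1e-3, max_iter=100):
    # Backward pass via the implicit function theorem: solve
    # v = vjp(v) + grad_out, where vjp(v) applies the transposed Jacobian
    # of f at the equilibrium; the solution is the gradient through the
    # fixed point, i.e., (I - J_f)^{-T} grad_out.
    v = grad_out
    for _ in range(max_iter):
        v_new = vjp(v) + grad_out
        if torch.linalg.norm(v_new - v) / (torch.linalg.norm(v) + 1e-12) < tol:
            return v_new
        v = v_new
    return v

# Toy usage with a contractive affine map f(x) = 0.5 * x + b,
# whose fixed point is 2 * b and whose implicit gradient is 2 * grad_out.
b = torch.tensor([1.0, -2.0])
x_star = deq_forward(lambda x: 0.5 * x + b, torch.zeros(2))
grad = deq_backward(lambda v: 0.5 * v, torch.ones(2))
```

Both passes reuse the same fixed-point machinery; only the map being iterated changes, which is why a single stopping criterion can govern forward and backward iterations.
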
Open Source Code: Yes
LLM Response: The code for our numerical evaluation is available at: https://github.com/wustl-cig/ODER.

Open Datasets: Yes
LLM Response: In the simulation, we randomly extracted and cropped 400 slices of 416×416 images for training, 28 images for validation, and 56 images for testing from the BreCaHAD database [85]. We consider simulated data obtained from the clinically realistic CT images provided by Mayo Clinic for the low-dose CT grand challenge [87]. The first dataset [28] provides 800 slices of 256×256 images for training and 50 slices for testing. The second dataset [91] contains 400 randomly selected volumes of 320×320×10 images for training, and 32 volumes for testing.

Dataset Splits: Yes
LLM Response: In the simulation, we randomly extracted and cropped 400 slices of 416×416 images for training, 28 images for validation, and 56 images for testing from the BreCaHAD database [85].

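For illustration, here is a hedged sketch of how the reported 400/28/56 train/validation/test split could be drawn. The function name, the seed, and the selection procedure are assumptions; the excerpt does not specify how the authors performed the random extraction.

```python
import random

def split_indices(n_slices, n_train=400, n_val=28, n_test=56, seed=0):
    # Hypothetical reconstruction of the reported random split of
    # BreCaHAD slices; the authors' actual selection code is not shown.
    idx = list(range(n_slices))
    random.Random(seed).shuffle(idx)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:n_train + n_val + n_test])

# 484 = 400 + 28 + 56 slices in total for this example.
train_idx, val_idx, test_idx = split_indices(n_slices=484)
```
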
Hardware Specification: No
LLM Response: The paper's reproducibility checklist states: "3(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] In the Supplement." However, the provided text does not contain these specific details within the main body.

Software Dependencies: No
LLM Response: The paper mentions software components such as U-Net, DnCNN, DRUNet, a PyTorch implementation of the Radon and IRadon transforms, Nesterov acceleration, Anderson acceleration, Adam, fminbound from the scipy.optimize toolbox, and SigPy, but it does not provide specific version numbers for these software components or libraries.

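As a small illustration of one named dependency, here is a hedged example of scipy.optimize.fminbound, the bounded one-dimensional minimizer the paper mentions. The toy objective below is an assumption; the quantity the authors actually optimize with it is not given in the excerpt.

```python
from scipy.optimize import fminbound

def objective(tau):
    # Toy surrogate with a minimum at tau = 0.3; in practice this would be
    # a validation metric evaluated as a function of a scalar parameter.
    return (tau - 0.3) ** 2

# Bounded 1-D search over [0, 1] to the requested tolerance.
best_tau = fminbound(objective, 0.0, 1.0, xtol=1e-4)
print(f"selected parameter: {best_tau:.4f}")
```
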
Experiment Setup: Yes
LLM Response: During the training of both ODER and RED (DEQ), we use the Nesterov acceleration [80] for the forward pass and Anderson acceleration [83] for the backward pass. We also adopt the stopping criterion from [40, 84] by setting residual tolerance to 10⁻³ for both forward and backward iterations (see supplement for additional details).

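To make the setup concrete, below is a minimal sketch of a Nesterov-accelerated fixed-point forward pass using the 10⁻³ relative-residual stopping rule the paper reports. The extrapolation schedule and all names are standard-choice assumptions rather than the authors' implementation; the Anderson-accelerated backward pass would use the same stopping logic.

```python
import numpy as np

def nesterov_fixed_point(f, x0, tol=1e-3, max_iter=200):
    # Nesterov-accelerated fixed-point iteration for the forward pass,
    # stopped when the relative residual drops below `tol` (10^-3 in the
    # paper). The momentum schedule below is one common choice; the
    # authors' exact variant may differ.
    x_prev, s, t_prev = x0, x0, 1.0
    for _ in range(max_iter):
        x = f(s)  # apply the fixed-point map at the extrapolated point
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
        s = x + ((t_prev - 1.0) / t) * (x - x_prev)  # momentum extrapolation
        res = np.linalg.norm(x - x_prev) / (np.linalg.norm(x_prev) + 1e-12)
        x_prev, t_prev = x, t
        if res < tol:
            break
    return x_prev

# Toy usage: converges to the fixed point (2.0 in every entry) of a
# contractive map f(x) = 0.5 * x + 1.
x_star = nesterov_fixed_point(lambda x: 0.5 * x + 1.0, np.zeros(3))
```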