Near-Exact Recovery for Tomographic Inverse Problems via Deep Learning

Authors: Martin Genzel, Ingo Gühring, Jan Macdonald, Maximilian März

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that an iterative end-to-end network scheme enables reconstructions close to numerical precision, comparable to classical compressed sensing strategies. Our results build on our winning submission to the recent AAPM DL-Sparse-View CT Challenge. Its goal was to identify the state-of-the-art in solving the sparse-view CT inverse problem with data-driven techniques.
Researcher Affiliation | Academia | (1) Helmholtz-Zentrum Berlin für Materialien und Energie, Germany (work done while at Utrecht University, Netherlands); (2) Technical University Berlin, Germany.
Pseudocode | No | No explicit pseudocode or algorithm blocks labeled as such were found. The methodology is described through text and diagrams (e.g., Figure 3 illustrates the iterative scheme), but not in a structured pseudocode format.
Open Source Code | Yes | Our code is available at https://github.com/jmaces/aapm-ct-challenge.
Open Datasets | Yes | The AAPM challenge data is similar to the setting of Sidky et al. (2021a)... The provided training set consisted of 4000 tuples of phantom images... For an evaluation of our method on the LoDoPaB-CT dataset, see Appendix C. ... we have applied it to the low-dose parallel beam (LoDoPaB) CT dataset (Leuschner et al., 2021).
Dataset Splits | Yes | The provided training set consisted of 4000 tuples of phantom images... A test set of 100 pairs... After the competition period, the challenge organizer has provided us with 10000 additional test samples... Note that we report the RMSE on a subset of 125 images from the training set used for validation. ... each blue point corresponds to one image in the LoDoPaB-CT validation set (3522 images).
Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or cloud computing instance types) used for the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions software like PyTorch and the Adam optimizer, but does not provide specific version numbers for these or any other software dependencies, which would be required for reproducibility. For example, it states "We have implemented a discrete fanbeam transform from scratch in PyTorch" without a version.
Experiment Setup | Yes | "This minimization problem is tackled by 400 epochs of mini-batch stochastic gradient descent and the Adam optimizer (Kingma & Ba, 2014) with initial learning rate 0.0002 and batch size 4." and "We start by training an ItNet4 (with weight sharing) for 500 epochs of mini-batch stochastic gradient descent and Adam with an initial learning rate of 8 · 10^-5 and a batch size of 2 (restarting Adam after 250 epochs)." Also provides details on λ initialization, UNet parameter initialization, group normalization, and memory channels (c_mem = 5).
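
To make the reported setup more concrete, a few hedged sketches follow. For the LoDoPaB-CT data mentioned in the "Open Datasets" row, one lightweight access path is the `dival` Python package; this is our suggestion rather than something the paper prescribes, and the function and keyword argument below are assumptions about dival's API:

```python
# Hedged sketch: load LoDoPaB-CT (Leuschner et al., 2021) via the dival
# package (pip install dival); dival downloads the Zenodo-hosted data on
# first use. The `impl` choice for the ray transform is an assumption.
from dival import get_standard_dataset

dataset = get_standard_dataset('lodopab', impl='skimage')
observation, ground_truth = dataset.get_sample(0, part='train')
```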
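
The split sizes in the "Dataset Splits" row (4000 training tuples, of which 125 serve as a validation subset) can be mirrored with standard PyTorch utilities; the tensors and seed below are dummies of our own choosing:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Dummy stand-ins for the 4000 training tuples of phantom images.
data = TensorDataset(torch.randn(4000, 1, 16, 16), torch.randn(4000, 1, 16, 16))

# Hold out 125 images for validation, matching the reported subset size.
train_set, val_set = random_split(
    data, [4000 - 125, 125], generator=torch.Generator().manual_seed(0)
)
print(len(train_set), len(val_set))  # 3875 125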
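
The "Software Dependencies" row quotes the paper's statement that a discrete fanbeam transform was implemented from scratch in PyTorch. The sketch below shows only the general pattern of wrapping a discretized linear forward operator as a differentiable `nn.Module` (so gradients flow through it in end-to-end training); the system matrix is a random placeholder, not the paper's fanbeam geometry:

```python
import torch
import torch.nn as nn

class DiscreteFanbeam(nn.Module):
    """Sketch of a matrix-based discrete tomographic forward operator.

    The paper's implementation computes the fanbeam geometry from the scanner
    setup; here A is a random placeholder so the example stays self-contained.
    """
    def __init__(self, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)  # shape: (num_rays, num_pixels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W)  ->  sinogram: (batch, num_rays)
        return x.flatten(start_dim=1) @ self.A.T

A = torch.randn(180, 64 * 64)                  # placeholder system matrix
fanbeam = DiscreteFanbeam(A)
sinogram = fanbeam(torch.randn(4, 1, 64, 64))  # -> (4, 180)
```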
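
Finally, the "Experiment Setup" row pins down optimizer and schedule details. A minimal PyTorch sketch of the second stage (ItNet4 training: 500 epochs, Adam, learning rate 8 · 10^-5, batch size 2) follows; the network and data are placeholders, and reading "restarting Adam after 250 epochs" as re-instantiating the optimizer is our interpretation:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network and data; the actual ItNet/UNet definitions are in the
# authors' repository (https://github.com/jmaces/aapm-ct-challenge).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1)
)
loader = DataLoader(
    TensorDataset(torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)),
    batch_size=2,   # batch size 2, as reported for the ItNet4 stage
    shuffle=True,
)

loss_fn = nn.MSELoss()
lr, epochs, restart_at = 8e-5, 500, 250  # reported ItNet4 hyperparameters
optimizer = torch.optim.Adam(model.parameters(), lr=lr)

for epoch in range(epochs):
    if epoch == restart_at:
        # Re-create the optimizer so Adam's moment estimates are reset
        # (our reading of "restarting Adam after 250 epochs").
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for x, y in loader:
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
```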