Invert to Learn to Invert

Authors: Patrick Putzky, Max Welling

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In experiments on a public data set, we demonstrate that these deeper, and thus more expressive, networks perform state-of-the-art image reconstruction.
Researcher Affiliation | Collaboration | Patrick Putzky: Amlab, University of Amsterdam (UvA), Amsterdam, The Netherlands; Max-Planck-Institute for Intelligent Systems (MPI-IS), Tübingen, Germany; patrick.putzky@googlemail.com. Max Welling: Amlab, University of Amsterdam (UvA), Amsterdam, The Netherlands; Canadian Institute for Advanced Research (CIFAR), Canada; welling.max@googlemail.com
Pseudocode | No | The paper describes algorithmic steps in prose and through equations, but does not provide a formal pseudocode or algorithm block.
Open Source Code | Yes | An implementation of our approach can be found at https://github.com/pputzky/invertible_rim.
Open Datasets | Yes | We evaluate our approach on a public data set for accelerated MRI reconstruction that is part of the so-called fastMRI challenge [28]. All of our experiments were run on the single-coil data from Zbontar et al. [28].
Dataset Splits | Yes | The data set consists of 973 volumes or 34,742 slices in the training set, 199 volumes or 7,135 slices in the validation set, and 108 volumes or 3,903 slices in the test set.
Hardware Specification | No | The paper mentions 'a 16GB GPU' as the available memory budget but does not specify the hardware used for the experiments (e.g., GPU model such as an NVIDIA V100 or RTX 3090, or CPU type).
Software Dependencies | No | The paper uses components such as weight normalisation and gated recurrent units (GRUs), which imply standard deep learning libraries, but no specific software packages or versions (e.g., PyTorch 1.9, TensorFlow 2.x) are provided.
Experiment Setup | Yes | All iterative models were trained on 8 inference steps. ... The models consisted of 10 invertible layers with a fanned downsampling structure at each time step. ... The number of channels in the machine's state (η, s) was set to 64. ... We chose a sub-sampling factor of 0.01. ... We use weight normalisation [23] for all convolutional weights in the block and we disable the bias term for the last convolution.
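
To make the quoted configuration concrete, the sketch below assembles weight-normalised, invertible (additive-coupling) convolutional blocks into a recurrent model with a 64-channel machine state, 10 invertible layers per step, and 8 unrolled inference steps. This is a minimal PyTorch illustration under stated assumptions, not the authors' implementation (which is at https://github.com/pputzky/invertible_rim); the names conv_stack, ReversibleBlock, and InvertibleStep, the 3x3 kernels, and the ReLU are illustrative choices, and the paper's fanned downsampling structure and GRU-based state update are omitted.

```python
# Minimal sketch (not the authors' code) of the quoted setup: reversible
# convolutional blocks with weight-normalised convolutions, a 64-channel
# machine state, 10 invertible layers per step, and 8 inference steps.
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm


def conv_stack(channels: int) -> nn.Sequential:
    """Two 3x3 convolutions with weight normalisation; the bias of the last
    convolution is disabled, as described in the experiment setup."""
    return nn.Sequential(
        weight_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1)),
        nn.ReLU(),
        weight_norm(nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)),
    )


class ReversibleBlock(nn.Module):
    """Additive-coupling block: the input can be recovered exactly from the
    output, so intermediate activations never need to be stored."""

    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.f = conv_stack(half)
        self.g = conv_stack(half)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.chunk(x, 2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = torch.chunk(y, 2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)


class InvertibleStep(nn.Module):
    """One recurrent inference step built from a stack of reversible blocks.
    The fanned downsampling structure and the GRU state update of the paper
    are deliberately omitted to keep the sketch small."""

    def __init__(self, channels: int = 64, n_layers: int = 10):
        super().__init__()
        self.blocks = nn.ModuleList([ReversibleBlock(channels) for _ in range(n_layers)])

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            state = block(state)
        return state

    def inverse(self, state: torch.Tensor) -> torch.Tensor:
        for block in reversed(self.blocks):
            state = block.inverse(state)
        return state


if __name__ == "__main__":
    step = InvertibleStep(channels=64, n_layers=10)
    state = torch.randn(2, 64, 32, 32)  # (batch, machine-state channels, H, W)

    out = state
    for _ in range(8):  # 8 unrolled inference steps
        out = step(out)

    rec = out
    for _ in range(8):  # running the inverse recovers the original state
        rec = step.inverse(rec)

    print(torch.allclose(rec, state, atol=1e-5))
```

Because each block is invertible, activations can be recomputed from the output rather than stored, so memory stays roughly constant in the number of layers and inference steps; that is the property that lets the paper train the deeper, more expressive unrolled models within the cited 16GB GPU memory budget.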