Unsupervised Learning From Incomplete Measurements for Inverse Problems

Authors: Julián Tachella, Dongdong Chen, Mike Davies

NeurIPS 2022

Reproducibility assessment. Each variable below is listed with its extracted result, followed by the supporting LLM response (quoted excerpts come from the paper).
Research Type: Experimental
LLM Response: "We validate our theoretical bounds and demonstrate the advantages of the proposed unsupervised loss compared to previous methods via a series of experiments on various imaging inverse problems, such as accelerated magnetic resonance imaging, compressed sensing, and image inpainting."
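For concreteness, all three problems fit the same linear model y = A x with an incomplete operator A (fewer measurements m than pixels n). The following is a minimal sketch of one such forward model, inpainting with a diagonal masking operator, written in PyTorch (the paper does not confirm its framework); the 25% sampling rate here is purely illustrative.

```python
import torch

# Minimal sketch (not the authors' code): an inpainting forward operator.
# A acts diagonally on the pixels, keeping a random subset and zeroing the rest.
def inpainting_forward(x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    return x * mask

x = torch.rand(1, 28, 28)                   # toy image
mask = (torch.rand_like(x) < 0.25).float()  # keep ~25% of pixels (illustrative)
y = inpainting_forward(x, mask)             # incomplete measurements y = A x
```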
Researcher Affiliation: Academia
LLM Response: "Julián Tachella, Laboratoire de Physique, CNRS & ENSL, Lyon, France (julian.tachella@cnrs.fr); Dongdong Chen, School of Engineering, University of Edinburgh, Edinburgh, UK (d.chen@ed.ac.uk); Mike Davies, School of Engineering, University of Edinburgh, Edinburgh, UK (mike.davies@ed.ac.uk)"
Pseudocode: No
LLM Response: The paper describes algorithms (Dictionary and Subspace Learning, AmbientGAN, Measurement Splitting, Proposed Method) but does not include any explicit pseudocode blocks or algorithm figures.
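Since no pseudocode is given, the sketch below is a hedged reconstruction of a training step for the proposed unsupervised loss, pieced together from the paper's description: a measurement-consistency term on the observed operator plus a cross-operator consistency term using a second, randomly drawn operator. The network signature f(y, A) and the equal weighting of the two terms are assumptions, not details confirmed by the paper.

```python
import torch

def unsupervised_step(f, y, ops, g):
    """Hedged sketch of one training step for the proposed loss.

    f   : reconstruction network, x_hat = f(y, A) (signature assumed)
    y   : measurements of one sample, y = A_g x
    ops : list of the G forward operators A_1, ..., A_G (callables)
    g   : index of the operator that produced y
    """
    A_g = ops[g]
    x_hat = f(y, A_g)
    # Term 1: measurement consistency on the observed operator.
    loss_mc = ((A_g(x_hat) - y) ** 2).mean()

    # Term 2: cross-operator consistency. Re-measure the estimate with a
    # different randomly drawn operator and ask the network to recover it.
    g2 = int(torch.randint(len(ops), (1,)))
    x_tilde = f(ops[g2](x_hat), ops[g2])
    loss_cross = ((x_tilde - x_hat) ** 2).mean()

    # Equal weighting of the two terms is our assumption.
    return loss_mc + loss_cross
```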
Open Source Code: Yes
LLM Response: "3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]"
Open Datasets: Yes
LLM Response: "Compressed Sensing and Inpainting with MNIST: We evaluate the theoretical bounds on the MNIST dataset, based on the well-known approximation of its box-counting dimension k ≈ 12 [33]. The dataset contains N = 60000 training samples... Inpainting with CelebA: We evaluate the unsupervised methods in Section 5 on the CelebA dataset [36]... Accelerated MRI with fastMRI: Finally, we consider the fastMRI dataset [38]..."
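Of the three datasets, only MNIST can be fetched automatically; CelebA and fastMRI require separate downloads and license agreements. A minimal torchvision snippet for the MNIST portion (the root path "data" is a placeholder):

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist_train = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
mnist_test = datasets.MNIST("data", train=False, download=True, transform=to_tensor)
print(len(mnist_train), len(mnist_test))  # 60000 training / 10000 test samples
```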
Dataset Splits: Yes
LLM Response: "The dataset contains N = 60000 training samples... The test set consists of 10000 samples... which is split into 32556 images for training and 32556 images for testing... We used 900 images for training and 74 for testing, which we split across G = 40 operators."
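The "split across G = 40 operators" detail suggests each training image is paired with one of the G measurement operators. A hedged sketch of such an assignment (uniform random pairing is an assumption; the paper may use a fixed partition):

```python
import torch

G, n_train = 40, 900                     # fastMRI split reported above
op_index = torch.randint(G, (n_train,))  # operator id assigned to each image
```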
Hardware Specification: Yes
LLM Response: "All our experiments were performed using an internal cluster of 4 NVIDIA RTX 3090 GPUs with a total compute time of approximately 48 hours."
Software Dependencies: No
LLM Response: The paper mentions architectural choices like "U-Net" and "DCGAN architecture" and that a network has "5 fully connected layers and ReLU non-linearities", but it does not specify software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions).
Experiment Setup: Yes
LLM Response: "All our experiments were performed using an internal cluster of 4 NVIDIA RTX 3090 GPUs with a total compute time of approximately 48 hours. ...using a network with 5 fully connected layers and ReLU non-linearities. ...The inpainting operators have a diagonal structure... with 4× acceleration, i.e., m/n = 0.25. ...we follow the strategy in [19], and choose to assign a random subset representing 60% of the measurements in A_g to A_{g,1} and the remaining to A_{g,2}."
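The 60/40 measurement-splitting strategy attributed to [19] can be sketched as follows; splitting along the last axis of y, the roles of the two subsets (input vs. training target), and the helper name split_measurements are our own choices, not the paper's:

```python
import torch

def split_measurements(y, frac=0.6, generator=None):
    """Hedged sketch: assign a random 60% subset of the m measurements in
    A_g to A_{g,1} and the remaining 40% to A_{g,2}."""
    m = y.shape[-1]
    perm = torch.randperm(m, generator=generator)
    cut = int(frac * m)
    return y[..., perm[:cut]], y[..., perm[cut:]], perm[:cut], perm[cut:]

y = torch.randn(8, 128)                    # toy batch of measurement vectors
y1, y2, idx1, idx2 = split_measurements(y) # y1 is the 60% subset
```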