Solving Inverse Problems with a Flow-based Noise Model

Authors: Jay Whang, Qi Lei, Alex Dimakis

ICML 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically validate the efficacy of our method on various inverse problems, including compressed sensing with quantized measurements and denoising with highly structured noise patterns. We also present initial theoretical recovery guarantees for solving inverse problems with a flow prior.
Researcher Affiliation | Academia | Jay Whang (Dept. of Computer Science, UT Austin, TX, USA), Qi Lei (Dept. of Electrical and Computer Engineering, Princeton University, NJ, USA), Alexandros G. Dimakis (Dept. of Electrical and Computer Engineering, UT Austin, TX, USA).
Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | We trained multi-scale Real NVP models on two image datasets, MNIST and CelebA-HQ (LeCun et al., 1998; Liu et al., 2015).
Dataset Splits | No | The paper mentions running experiments on the 'test set' and refers to 'trained models', implying training data, but it does not specify the sizes or proportions of training, validation, and test splits (e.g., '80/10/10' or '70% training, 15% validation, 15% test') or reference predefined splits for reproducibility beyond stating the test-set size.
Hardware Specification | No | The paper mentions 'computing resources from TACC' in the Acknowledgements but does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) that would be needed to replicate the experiments.
Experiment Setup | Yes | To remedy this, we use a smoothed version of the model density p_G(x)^β, where β ≥ 0 is the smoothing parameter... Thus the loss we minimize becomes L_MAP(z; y, β) = −log p(y − f(G(z))) − β log p_G(G(z)) (10)
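The smoothed MAP objective quoted above can be sketched numerically. This is an illustrative toy, not the paper's implementation: the "flow" G is a fixed invertible affine map standing in for a trained Real NVP, the forward operator f and the Gaussian noise model are arbitrary choices, and optimization uses plain gradient descent with finite-difference gradients to stay dependency-free. Only the shape of the loss, −log p(y − f(G(z))) − β log p_G(G(z)), follows Eq. (10).

```python
import numpy as np

rng = np.random.default_rng(0)

d, m = 8, 4                    # latent/signal dimension, number of measurements
A = rng.normal(size=(m, d))    # toy forward operator: f(x) = A @ x
W = 1.5 * np.eye(d)            # toy "flow": G(z) = W @ z (invertible)

def G(z):
    return W @ z

def log_p_G(x):
    # Density of x = G(z), z ~ N(0, I), via change of variables:
    # log p_G(x) = log N(W^{-1} x; 0, I) - log|det W|
    z = np.linalg.solve(W, x)
    return (-0.5 * z @ z - 0.5 * d * np.log(2 * np.pi)
            - np.log(abs(np.linalg.det(W))))

def loss_map(z, y, beta, sigma=1.0):
    # Eq. (10): -log p(y - f(G(z))) - beta * log p_G(G(z)),
    # with an assumed Gaussian noise model N(0, sigma^2 I).
    r = y - A @ G(z)
    nll_noise = 0.5 * r @ r / sigma**2
    return nll_noise - beta * log_p_G(G(z))

# Recover a latent from noisy measurements by descending the smoothed loss.
z_true = rng.normal(size=d)
y = A @ G(z_true) + 0.1 * rng.normal(size=m)

z = np.zeros(d)
eps = 1e-4
for _ in range(500):
    # Finite-difference gradient, to avoid an autodiff dependency.
    grad = np.array([(loss_map(z + eps * e, y, beta=0.5)
                      - loss_map(z - eps * e, y, beta=0.5)) / (2 * eps)
                     for e in np.eye(d)])
    z -= 1e-3 * grad

print(loss_map(z, y, beta=0.5))
```

Setting β below 1 down-weights the prior term relative to the data-fit term; the paper's point is that this smoothing makes the MAP objective easier to optimize while keeping the flow's exact density available.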