Uncertainty Quantification via Neural Posterior Principal Components

Authors: Elias Nehme, Omer Yair, Tomer Michaeli

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We showcase our method on multiple inverse problems in imaging, including denoising, inpainting, super-resolution, colorization, and biological image-to-image translation. Our method reliably conveys instance-adaptive uncertainty directions, achieving uncertainty quantification comparable with posterior samplers while being orders of magnitude faster."
Researcher Affiliation | Academia | Elias Nehme, Technion - Israel Institute of Technology, seliasne@campus.technion.ac.il; Omer Yair, Technion - Israel Institute of Technology, omeryair@campus.technion.ac.il; Tomer Michaeli, Technion - Israel Institute of Technology, tomer.m@ee.technion.ac.il
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code and examples are available on our webpage."
Open Datasets | Yes | Handwritten digits: "Figure 4 demonstrates NPPC on denoising and inpainting of handwritten digits from the MNIST dataset." Faces: "To test NPPC on faces, we trained on the CelebA-HQ dataset using the original split inherited from CelebA [24]..." Biological image-to-image translation: "Here, we applied NPPC to a dataset of migrating cells imaged live for 14h (1 picture every 10min) using a spinning-disk microscope [52]. The dataset consisted of 1753 image pairs of resolution 1024×1024, out of which 1748 were used for training, and 5 were used for testing following the original split by the authors."
Dataset Splits | Yes | "...resulting in 24183 images for training, 2993 images for validation, and 2824 images for testing."
Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using "variants of the U-Net architecture [11, 40]" but does not provide version numbers for any software dependencies (e.g., programming languages, libraries, frameworks).
Experiment Setup | Yes | "Full details regarding the architectures, the scheduler, and the per-task setting of λ1, λ2 are in App. A."