Provable Posterior Sampling with Denoising Oracles via Tilted Transport

Authors: Joan Bruna, Jiequn Han

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We now validate our theoretical results by applying Algorithm 8 to the Gaussian mixture model in high dimensions, using LMC as the baseline algorithm. We perform four inverse tasks on the Flickr-Faces-HQ Dataset (FFHQ) [39] to demonstrate the application of the tilted transport technique on imaging data as a proof of concept."
Researcher Affiliation | Collaboration | Joan Bruna (New York University & Flatiron Institute, bruna@cims.nyu.edu); Jiequn Han (Flatiron Institute, jhan@simonsfoundation.org)
Pseudocode | Yes | "Algorithm 1 Sampling Using Tilted Transport"
Open Source Code | Yes | Code for the Gaussian mixture experiments is provided in a single zip file.
Open Datasets | Yes | "We now validate our theoretical results by applying Algorithm 8 to the Gaussian mixture model in high dimensions, using LMC as the baseline algorithm. We perform four inverse tasks on the Flickr-Faces-HQ Dataset (FFHQ) [39] to demonstrate the application of the tilted transport technique on imaging data as a proof of concept."
Dataset Splits | No | The paper describes the datasets used (Gaussian mixture and FFHQ) but does not provide specific details on how they were split into training, validation, and test sets, beyond implying evaluation on generated samples.
Hardware Specification | Yes | "Our experiments with the Gaussian mixture model require only a few seconds per run on a laptop."
Software Dependencies | No | The paper mentions using "BlackJAX [12]" and the "NVIDIA codebase [47]" but does not provide version numbers for these dependencies or for other libraries.
Experiment Setup | Yes | "We examine three cases where d = 20, 40, and 80. In each scenario, we set d_y = d, fix κ = 20, and vary the SNR from 10^-5 to 10^-1. We use BlackJAX [12] to implement the No-U-Turn sampler. Our algorithm was implemented using the NVIDIA codebase [47] with 1000 diffusion steps for posterior sampling, and utilized the score function from a pretrained diffusion model [20]. Similar to our Gaussian mixture model experiments, where we adjusted the timing for the boosted posterior to avoid the singularity of Q_t, we shifted 6-10 timesteps when setting the boosted posterior."
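For readers unfamiliar with the LMC baseline quoted above, the sketch below shows unadjusted Langevin Monte Carlo targeting a toy one-dimensional two-component Gaussian mixture. This is an illustrative stand-in only, not the paper's implementation (which uses BlackJAX/JAX in high dimensions); the mixture, step size, and iteration count are hypothetical choices made for this example.

```python
import math
import random

def grad_log_mixture(x, mu=3.0):
    # Gradient of log p(x) for p = 0.5*N(-mu, 1) + 0.5*N(mu, 1).
    a = math.exp(-0.5 * (x + mu) ** 2)  # unnormalized density of left mode
    b = math.exp(-0.5 * (x - mu) ** 2)  # unnormalized density of right mode
    return (-(x + mu) * a - (x - mu) * b) / (a + b)

def lmc(n_steps=20000, step=0.05, seed=0):
    # Unadjusted Langevin dynamics:
    #   x_{k+1} = x_k + step * grad log p(x_k) + sqrt(2 * step) * xi_k
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        x = x + step * grad_log_mixture(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

samples = lmc()
```

With well-separated modes such as ±3, plain LMC mixes slowly between modes; this is precisely the regime where the paper argues the tilted-transport boost helps the baseline sampler.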