Particle Denoising Diffusion Sampler
Authors: Angus Phillips, Hai-Dang Dau, Michael John Hutchinson, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate PDDS on multimodal and high dimensional sampling tasks. ... We evaluate the quality of normalizing constant estimates produced by PDDS on a variety of sampling tasks. ... We compare our performance to a selection of strong baselines. ... We present the normalizing constant estimation results in Figure 2. |
| Researcher Affiliation | Academia | 1: University of Oxford; 2: CNRS, ENS Ulm. Correspondence to: Angus Phillips <angus.phillips@stats.ox.ac.uk>. |
| Pseudocode | Yes | Algorithm 1 Particle Denoising Diffusion Sampler ... Algorithm 2 Potential neural network training ... Algorithm 3 Particle Denoising Diffusion Sampler with Adaptive Resampling ... Algorithm 4 Generic SMC algorithm ... Algorithm 5 Generic sorted stratified resampling (an illustrative SMC sketch appears after the table) |
| Open Source Code | Yes | Our method was implemented in Python using the libraries of JAX (Bradbury et al., 2018), Haiku and Optax. Our implementation is available on Github1. (Footnote 1: https://github.com/angusphillips/particle_denoising_diffusion_sampler) |
| Open Datasets | Yes | The synthetic targets are Mixture, a 2-dimensional mixture of Gaussian distributions with separated modes (Arbel et al., 2021) and Funnel, a 10-dimensional target displaying challenging variance structure (Neal, 2003). The posterior distributions are Sonar, a logistic regression posterior fitted to the Sonar (61-dimensional) dataset and LGCP, a log Gaussian Cox process (Møller et al., 1998) modelling the rate parameter of a Poisson point process on a 40 × 40 = 1600-point grid, fitted to the Pines dataset. |
| Dataset Splits | No | The paper describes model training details, such as '10,000 optimisation steps' for variational approximation and '10,000 iterations of the Adam optimizer' for network training. However, it does not explicitly provide details about train/validation/test splits for the datasets themselves. |
| Hardware Specification | Yes | The number of trainable parameters for each method and task can be found in Table 1, along with training time and sampling time (performed on a NVIDIA GeForce GTX 1080 Ti GPU). |
| Software Dependencies | No | Our method was implemented in Python using the libraries of JAX (Bradbury et al., 2018), Haiku and Optax. While the libraries are named, specific version numbers for Python, JAX, Haiku, or Optax are not provided. |
| Experiment Setup | Yes | In all experiments we used 2000 particles to estimate the normalizing constant. ... We train for 10,000 iterations of the Adam optimizer ... with batch size 300 and a learning rate of 1e-3, which decays exponentially at a rate of 0.95 every 50 iterations ... PDDS-MCMC used 10 Metropolis-adjusted Langevin MCMC steps with step sizes tuned based on initial runs with the initial simple approximation, targeting an acceptance rate of approximately 0.6. (A sketch of this optimizer configuration appears after the table.) |
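For context on the pseudocode row, the resample–propagate–reweight loop followed by a generic SMC algorithm (the paper's Algorithm 4) can be summarised as below. This is only an illustrative sketch, not the authors' implementation: `propagate` and `log_potential` are hypothetical placeholders for the diffusion transition kernel and incremental potential, and plain multinomial resampling stands in for the sorted stratified scheme of Algorithm 5.

```python
import jax
import jax.numpy as jnp


def smc_step(key, particles, log_weights, propagate, log_potential):
    """One generic SMC iteration: resample, propagate, reweight.

    `propagate(key, particles)` and `log_potential(particles)` are
    user-supplied placeholders, not the paper's kernels.
    """
    n = particles.shape[0]
    # Resample indices proportionally to the normalised weights
    # (multinomial resampling here; the paper's Algorithm 5 uses a
    # sorted stratified scheme instead).
    key, sub = jax.random.split(key)
    idx = jax.random.choice(sub, n, shape=(n,), p=jax.nn.softmax(log_weights))
    particles = particles[idx]
    # Propagate the resampled particles through the transition kernel.
    key, sub = jax.random.split(key)
    particles = propagate(sub, particles)
    # Reweight with the incremental log-potential (weights are uniform
    # immediately after resampling, so the new weights are just the potentials).
    log_weights = log_potential(particles)
    return key, particles, log_weights
```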
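The optimizer settings quoted in the Experiment Setup row translate directly into an Optax schedule. The snippet below is a minimal sketch under that reading; the parameter tree is a dummy stand-in for the potential network, whose Haiku architecture is not reproduced here.

```python
import jax.numpy as jnp
import optax

# Learning rate 1e-3, decaying exponentially at rate 0.95 every 50 iterations,
# as reported in the paper's training details. `staircase=True` applies the
# decay in discrete steps every 50 iterations; a continuous decay is another
# plausible reading of the paper's description.
schedule = optax.exponential_decay(
    init_value=1e-3, transition_steps=50, decay_rate=0.95, staircase=True
)
optimizer = optax.adam(learning_rate=schedule)

# Placeholder parameters; the actual potential network is defined with Haiku
# in the authors' repository.
params = {"w": jnp.zeros((10, 1)), "b": jnp.zeros((1,))}
opt_state = optimizer.init(params)
```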