Parallel Sampling of Diffusion Models

Authors: Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experiment with our method ParaDiGMS on a suite of robotic control tasks [4] including Square [41], PushT, Franka Kitchen [7], and high-dimensional image generation models including Stable Diffusion-v2 [24] and LSUN Church and Bedroom [39].
Researcher Affiliation | Academia | Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh, Nima Anari, Computer Science, Stanford University, {andyshih,belkhale,ermon,dorsa,anari}@cs.stanford.edu
Pseudocode | Yes | In Algorithm 1 we present the complete procedure of ParaDiGMS, incorporating a sliding window over a batch, up-front sampling of noise, and a tolerance for Picard iterations (Fig. 3). (A sketch of the Picard-iteration idea appears after this table.)
Open Source Code | Yes | Code for our paper can be found at https://github.com/AndyShih12/paradigms
Open Datasets | Yes | We experiment with our method ParaDiGMS on a suite of robotic control tasks [4] including Square [41], PushT, Franka Kitchen [7], and high-dimensional image generation models including Stable Diffusion-v2 [24] and LSUN Church and Bedroom [39].
Dataset Splits | No | The paper mentions 'evaluation episodes' and total sample counts for metrics (e.g., '5000 samples' for the FID score), but it does not specify explicit train/validation/test splits with percentages or sample counts.
Hardware Specification | Yes | on a single A40 GPU.
Software Dependencies | No | The paper mentions 'PyTorch' and the 'Diffusers library' but does not specify version numbers for these software components or for any other ancillary software.
Experiment Setup | Yes | Each environment uses a prediction horizon of 16 and a replanning horizon of 8. The DDPM scheduler in Diffusion Policy [4] uses a 100-step discretization, and the DDIM/DPMSolver schedulers use a 15-step discretization. We use tolerance 5e-1 for DDPM and 1e-3 for DDIM.
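The Pseudocode row above quotes the paper's Algorithm 1. To make the mechanism concrete, here is a minimal, hypothetical PyTorch sketch of the underlying Picard-iteration idea. It is not the authors' Algorithm 1: it omits the sliding window and the up-front noise sampling, and it assumes a batched `drift(x_batch, t_batch)` callable that returns the per-step update of the sequential sampler.

```python
import torch

def picard_parallel_sample(drift, x_start, timesteps, tol=1e-3):
    """Minimal Picard-iteration sampler sketch (assumptions noted above).

    `drift(x_batch, t_batch)` is assumed to return the per-step update of
    the underlying sequential sampler, i.e. x_{i+1} = x_i + drift(x_i, t_i).
    """
    T = len(timesteps)
    # Guess the whole trajectory: every point starts at the initial latent.
    xs = torch.stack([x_start] * (T + 1))

    for _ in range(T):  # Picard iteration converges in at most T sweeps
        # Within one sweep, all T drift evaluations depend only on the
        # current trajectory guess, so they run as one batched model call.
        drifts = drift(xs[:-1], timesteps)
        new_xs = xs.clone()
        # Picard update: x_i = x_0 + sum over j < i of drift_j.
        new_xs[1:] = xs[0] + torch.cumsum(drifts, dim=0)
        delta = (new_xs - xs).flatten(start_dim=1).norm(dim=1).max()
        xs = new_xs
        if delta < tol:  # trajectory change within tolerance; stop early
            break
    return xs[-1]
```

The point of the parallelization is that each sweep replaces T sequential network calls with one large batched call, trading extra compute for lower wall-clock latency when the trajectory converges in far fewer than T sweeps.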
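For the scheduler settings quoted in the Experiment Setup row, a minimal sketch using the Diffusers schedulers might look as follows. The numeric values mirror the numbers reported above; this is an illustration, not the paper's actual configuration code.

```python
from diffusers import DDPMScheduler, DDIMScheduler

# 100-step DDPM discretization, as used by Diffusion Policy [4].
ddpm = DDPMScheduler(num_train_timesteps=100)

# 15-step DDIM discretization over the same 100 training timesteps.
ddim = DDIMScheduler(num_train_timesteps=100)
ddim.set_timesteps(num_inference_steps=15)

# Picard-iteration tolerances reported in the paper, per scheduler.
TOLERANCE = {"ddpm": 5e-1, "ddim": 1e-3}
```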