Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference

Authors: Xunpeng Huang, Difan Zou, Hanze Dong, Yi Zhang, Yian Ma, Tong Zhang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments support our theory. In this section, we conduct experiments when the target distribution p is a Mixture of Gaussians (MoG) and compare RTK-based methods with traditional DDPM.
Researcher Affiliation | Collaboration | Xunpeng Huang (HKUST, xhuangck@connect.ust.hk); Difan Zou (HKU, dzou@cs.hku.hk); Hanze Dong (Salesforce AI Research, hanze.dong@salesforce.com); Yi Zhang (HKU, yizhang101@connect.hku.hk); Yian Ma (UC San Diego, yianma@ucsd.edu); Tong Zhang (UIUC, tongzhang@tongzhang-ml.org)
Pseudocode | Yes | Algorithm 1: INFERENCE WITH REVERSE TRANSITION KERNEL (RTK); Algorithm 2: MALA/PROJECTED MALA FOR RTK INFERENCE; Algorithm 3: ULD FOR RTK INFERENCE (a hedged sketch of one MALA step appears after this table)
Open Source Code | No | The paper does not contain an explicit statement about releasing its source code or a link to a code repository.
Open Datasets | Yes | In this section, we conduct experiments when the target distribution p is a Mixture of Gaussians (MoG) and compare RTK-based methods with traditional DDPM. Furthermore, we conducted experiments on the MNIST dataset, as shown in Figure 6. (An illustrative MoG target sketch follows the table.)
Dataset Splits | No | The paper mentions training models but does not specify explicit training, validation, or test dataset splits.
Hardware Specification | Yes | The experiments are run on a single NVIDIA GeForce RTX 4090 GPU.
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | Yes | Specifically, while DDPM models x_η across a sequence of timesteps η spanning from 0 to T in increments of 0.001T (i.e., [0, 0.001T, 0.002T, ..., T]), we execute Alg. 1, 2, and 3 at fewer timesteps, x_{[0, 0.2T, 0.4T, 0.6T, 0.8T]}, and we distribute the NFE uniformly across these timesteps for MCMC. (A sketch of this schedule follows the table.)
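
The paper provides pseudocode for Algorithms 1-3 but no released code, so the following is a minimal sketch of a single MALA step as it might be used inside the inner loop of Algorithm 2. The `log_prob` and `grad_log_prob` callables stand in for the RTK target at a given timestep; every name here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def mala_step(x, log_prob, grad_log_prob, step_size, rng):
    """One Metropolis-adjusted Langevin (MALA) step targeting exp(log_prob).

    Proposal: x' = x + step_size * grad_log_prob(x) + sqrt(2*step_size) * noise,
    accepted or rejected with the standard Metropolis-Hastings correction.
    """
    noise = rng.standard_normal(x.shape)
    prop = x + step_size * grad_log_prob(x) + np.sqrt(2.0 * step_size) * noise

    def log_q(dst, src):
        # Log-density (up to a constant) of the Gaussian Langevin proposal.
        mean = src + step_size * grad_log_prob(src)
        return -np.sum((dst - mean) ** 2) / (4.0 * step_size)

    log_alpha = (log_prob(prop) - log_prob(x)
                 + log_q(x, prop) - log_q(prop, x))
    if np.log(rng.uniform()) < log_alpha:
        return prop
    return x
```

Because normalizing constants cancel in the acceptance ratio, `log_prob` only needs to be correct up to an additive constant, which is what makes MALA-style inner loops convenient for unnormalized RTK targets.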
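
Since the synthetic experiments target a Mixture of Gaussians, here is an equally hypothetical MoG log-density and score that could drive the `mala_step` sketch above. The component means, variance, and weights are placeholders; this summary does not report the paper's actual MoG parameters.

```python
import numpy as np

# Two-component MoG in 2D, purely illustrative.
MUS = np.array([[-2.0, 0.0], [2.0, 0.0]])   # component means (assumed)
SIGMA2 = 0.5                                 # shared isotropic variance (assumed)
WEIGHTS = np.array([0.5, 0.5])               # mixture weights (assumed)

def mog_log_prob(x):
    # log p(x) = logsumexp_k [log w_k - ||x - mu_k||^2 / (2 sigma^2)] + const
    sq = np.sum((x - MUS) ** 2, axis=1)
    logits = np.log(WEIGHTS) - sq / (2.0 * SIGMA2)
    m = logits.max()
    return m + np.log(np.sum(np.exp(logits - m)))

def mog_grad_log_prob(x):
    # grad log p(x) = sum_k r_k(x) * (mu_k - x) / sigma^2,
    # where r_k are the posterior component responsibilities.
    sq = np.sum((x - MUS) ** 2, axis=1)
    logits = np.log(WEIGHTS) - sq / (2.0 * SIGMA2)
    r = np.exp(logits - logits.max())
    r /= r.sum()
    return (r[:, None] * (MUS - x)).sum(axis=0) / SIGMA2

# Drive the mala_step sketch above toward the MoG target.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
for _ in range(1000):
    x = mala_step(x, mog_log_prob, mog_grad_log_prob, step_size=0.05, rng=rng)
```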
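
Finally, the discretization described in the experiment-setup row can be made concrete with a short sketch. `T` and `total_nfe` are illustrative placeholders (NFE = number of function evaluations); the summary does not report the actual budget.

```python
T = 1.0          # total diffusion time (illustrative placeholder)
total_nfe = 100  # overall NFE budget (illustrative placeholder)

# DDPM grid: [0, 0.001T, 0.002T, ..., T], i.e., 1001 points.
ddpm_times = [i * 0.001 * T for i in range(1001)]

# RTK grid: [0, 0.2T, 0.4T, 0.6T, 0.8T], i.e., 5 points.
rtk_times = [i * 0.2 * T for i in range(5)]

# Distribute the NFE budget uniformly over the RTK timesteps for MCMC.
nfe_per_timestep = total_nfe // len(rtk_times)  # 20 inner MCMC steps each
```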