Understanding Diffusion Models by Feynman’s Path Integral

Authors: Yuji Hirono, Akinori Tanaka, Kenji Fukushima

ICML 2024

Reproducibility assessment: each Reproducibility Variable is listed below with its Result, followed by the supporting LLM Response.
Research Type: Experimental
LLM Response: "We apply the Wentzel-Kramers-Brillouin (WKB) expansion...for evaluating the negative log-likelihood to assess the performance disparity between stochastic and deterministic sampling schemes. Based on the first-order NLL expression, we quantify the merit of noise in the sampling process by computing the NLL as well as the 2-Wasserstein distance. ... 5. Experiments"
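The excerpt does not say how the 2-Wasserstein distance is computed. As a rough illustration only, here is a minimal sketch of estimating W2 between two empirical sample sets with the POT library (an assumption; the authors do not name POT):

    import numpy as np
    import ot  # POT (Python Optimal Transport); assumed here, not cited by the paper

    def empirical_w2(xs: np.ndarray, xt: np.ndarray) -> float:
        """2-Wasserstein distance between point clouds of shape (n, d) and (m, d)."""
        a = np.full(len(xs), 1.0 / len(xs))   # uniform weights on source samples
        b = np.full(len(xt), 1.0 / len(xt))   # uniform weights on target samples
        cost = ot.dist(xs, xt)                # pairwise squared Euclidean costs (POT default)
        return float(np.sqrt(ot.emd2(a, b, cost)))  # exact OT cost, then sqrt for W2
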
Researcher Affiliation: Academia
LLM Response: "1 Department of Physics, Kyoto University, Kyoto 606-8502, Japan; 2 RIKEN AIP, RIKEN, Nihonbashi 103-0027, Japan; 3 RIKEN iTHEMS, RIKEN, Wako 351-0198, Japan; 4 Department of Mathematics, Keio University, Hiyoshi 223-8522, Japan; 5 Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0039, Japan."
Pseudocode: Yes
LLM Response: "Algorithm 1: 1st-logq Solver"
Open Source Code: Yes
LLM Response: "The code to reproduce experiments here can be found at https://github.com/AkinoriTanaka-phys/diffusion_path_integral."
Open Datasets: Yes
LLM Response: "Swiss-roll data is generated by sklearn.datasets.make_swiss_roll (Pedregosa et al., 2011) with noise = 0.5, hole = False."
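Because the generating call is fully specified, the dataset itself is easy to regenerate. A minimal sketch (the sample count, random seed, and 2D projection are assumptions; the excerpt does not state them):

    import numpy as np
    from sklearn.datasets import make_swiss_roll

    # noise=0.5 and hole=False are from the paper; n_samples and random_state are assumed.
    points, _ = make_swiss_roll(n_samples=10_000, noise=0.5, hole=False, random_state=0)
    # make_swiss_roll returns 3D points; keeping the (x, z) coordinates gives the usual
    # 2D swiss roll, though this projection is an assumption about the paper's setup.
    data_2d = points[:, [0, 2]].astype(np.float32)
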
Dataset Splits: No
LLM Response: The paper mentions 'validation data' in Figure 4 but does not provide specific details on how this data was split (e.g., percentages, sample counts) from the synthetic datasets.
Hardware Specification: No
LLM Response: The paper does not provide any specific hardware details, such as GPU models, CPU types, or cloud-computing instance specifications, used for running the experiments.
Software Dependencies: No
LLM Response: "We used JAX (Bradbury et al., 2018) and Flax (Heek et al., 2023) to implement our score-based models with neural networks, and Optax (DeepMind et al., 2020) for the training. ... We use scipy.integrate.solve_ivp (Virtanen et al., 2020)." The paper names the software and provides citations, but does not specify version numbers for JAX, Flax, Optax, or SciPy.
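Since version numbers go unreported, a reproduction should record them explicitly, and the quoted ODE-solver call can be sketched. A minimal illustration (the vector field, integration interval, and tolerances are placeholders, not the paper's settings):

    import jax, flax, optax, scipy
    import numpy as np
    from scipy.integrate import solve_ivp

    # Log the library versions that the paper leaves unspecified.
    print(jax.__version__, flax.__version__, optax.__version__, scipy.__version__)

    def drift(t, x):
        # Placeholder vector field; the paper integrates a learned probability-flow ODE.
        return -0.5 * x

    # solve_ivp accepts a decreasing t_span, so integrating from t=1 back toward 0 works.
    sol = solve_ivp(drift, t_span=(1.0, 1e-3), y0=np.ones(2), rtol=1e-5, atol=1e-5)
    print(sol.y[:, -1])
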
Experiment Setup: Yes
LLM Response: "Adam optimizer with learning rate 1e-3 and default values determined by Optax (DeepMind et al., 2020). We train our models for 16,000 epochs."
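The reported optimizer settings map directly onto Optax. A minimal sketch of the training-step wiring (the parameters and loss are placeholders, not the paper's score network or objective):

    import jax
    import jax.numpy as jnp
    import optax

    # Learning rate 1e-3 is from the paper; other Adam hyperparameters stay at Optax defaults.
    optimizer = optax.adam(learning_rate=1e-3)

    params = {"w": jnp.zeros((2, 2))}  # placeholder parameters
    opt_state = optimizer.init(params)

    def loss_fn(params, batch):
        # Placeholder quadratic loss; the paper trains a score-matching objective.
        return jnp.mean((batch @ params["w"]) ** 2)

    @jax.jit
    def train_step(params, opt_state, batch):
        grads = jax.grad(loss_fn)(params, batch)
        updates, opt_state = optimizer.update(grads, opt_state, params)
        return optax.apply_updates(params, updates), opt_state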