The probability flow ODE is provably fast
Authors: Sitan Chen, Sinho Chewi, Holden Lee, Yuanzhi Li, Jianfeng Lu, Adil Salim
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide preliminary numerical experiments on a toy example showing that DPUM can sample from a highly non-log-concave distribution (see Appendix). The numerical experiments are not among our main contributions and are provided for illustration only. |
| Researcher Affiliation | Collaboration | Harvard University (sitan@seas.harvard.edu); Institute for Advanced Study (schewi@ias.edu); Johns Hopkins University (hlee283@jhu.edu); Microsoft Research (yuanzhili@microsoft.com); Duke University (jianfeng@math.duke.edu); Microsoft Research (adilsalim@microsoft.com) |
| Pseudocode | Yes | Algorithm 1: DPOM(T, hpred, hcorr, s) Algorithm 2: DPUM(T, hpred, hcorr, s) |
| Open Source Code | Yes | The Python code can be found in the Supplementary material. |
| Open Datasets | No | The paper uses a "mixture of five Gaussians in dimension 5" and samples "500 independent points... from a standard Gaussian" for a toy example. No specific link, DOI, repository name, or formal citation for public access to this generated data is provided, nor is it a named, established benchmark dataset. |
| Dataset Splits | No | The paper describes a toy example setup but does not specify explicit training, validation, and test dataset splits. The experimental setup only mentions starting points and iterations: "We start by sampling 500 independent points (in blue) from a standard Gaussian. Then, we run DPUM from the blue dots over 300 iterations and plot the two first coordinates of the dots at iterations 0, 100, 200 and 300." |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) are provided for running the experiments. The paper only mentions it's a "low-dimensional toy example." |
| Software Dependencies | No | No specific software dependencies with version numbers are provided. The paper mentions "The Python code can be found in the Supplementary material" and "We use a closed form formula for the score along the forward process." |
| Experiment Setup | Yes | The step size of the predictor is 0.01 and the step size of the corrector is 0.001. The corrector consists of 3 steps of the underdamped Langevin algorithm. In that algorithm, we initialize the velocity as a centered Gaussian random variable with standard deviation 0.001 and set the parameter γ to 0.01. |
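To make the reported setup concrete, the predictor-corrector scheme described above (probability flow ODE predictor, underdamped Langevin corrector) can be sketched as follows. This is not the authors' supplementary code: the two-mode Gaussian mixture, the OU noising schedule, and the horizon `T = 3.0` are illustrative assumptions; only the step sizes (0.01 predictor, 0.001 corrector), the 3 corrector steps, γ = 0.01, the velocity initialization (std 0.001), the 500 starting points, and the closed-form score are taken from the table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy target: a mixture of unit-covariance Gaussians, noised by
# the OU process dX_t = -X_t dt + sqrt(2) dB_t. With identity component
# covariances, p_t is the same mixture with means shrunk by exp(-t).
MODES = np.array([[-4.0, 0.0], [4.0, 0.0]])

def score(x, t):
    """Closed-form score grad log p_t(x) of the noised mixture."""
    means = MODES * np.exp(-t)                    # (K, d) component means at time t
    diff = x[:, None, :] - means[None, :, :]      # (n, K, d)
    logw = -0.5 * np.sum(diff**2, axis=-1)        # (n, K) log responsibilities
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return -np.einsum('nk,nkd->nd', w, diff)      # sum_k w_k (mean_k - x)

def sample(n=500, T=3.0, h_pred=0.01, h_corr=0.001, corr_steps=3, gamma=0.01):
    x = rng.standard_normal((n, 2))               # 500 points from N(0, I) ~ p_T
    for k in range(int(T / h_pred)):
        t = T - k * h_pred
        # Predictor: Euler step of the probability flow ODE, run in reverse
        # time s = T - t, where dy/ds = y + score(y, T - s).
        x = x + h_pred * (x + score(x, t))
        t_next = t - h_pred
        # Corrector: 3 underdamped Langevin steps targeting p_{t_next},
        # velocity initialized as a centered Gaussian with std 0.001.
        v = 0.001 * rng.standard_normal(x.shape)
        for _ in range(corr_steps):
            v = (v - h_corr * gamma * v + h_corr * score(x, t_next)
                 + np.sqrt(2 * gamma * h_corr) * rng.standard_normal(x.shape))
            x = x + h_corr * v
    return x
```

Running `sample()` from 500 standard-Gaussian starting points over 300 predictor iterations (T / h_pred) moves the cloud onto the two mixture modes, mirroring the kind of figure described in the Dataset Splits row.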