Poisson Flow Generative Models
Authors: Yilun Xu, Ziming Liu, Max Tegmark, Tommi Jaakkola
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, PFGM achieves current state-of-the-art performance among the normalizing flow models on CIFAR-10, with an Inception score of 9.68 and a FID score of 2.35. |
| Researcher Affiliation | Academia | Massachusetts Institute of Technology {ylxu, zmliu, tegmark}@mit.edu; tommi@csail.mit.edu |
| Pseudocode | Yes | Algorithm 1: Learning the normalized Poisson Field |
| Open Source Code | Yes | The code is available at https://github.com/Newbeeer/poisson_flow. |
| Open Datasets | Yes | For image generation tasks, we consider the CIFAR-10 [22], CelebA 64 × 64 [38] and LSUN bedroom 256 × 256 [39] datasets. |
| Dataset Splits | Yes | We follow the training procedure in [33] and split the training data into 99% training and 1% validation sets for model selection. |
| Hardware Specification | Yes | All the experiments are run on a single NVIDIA A100 GPU. |
| Software Dependencies | No | The paper mentions 'Scipy library [37] with the RK45 [7] method' but does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | We choose M = 291 (CIFAR-10 and CelebA) / 356 (LSUN bedroom), σ = 0.01 and τ = 0.03 for the perturbation in Algorithm 2, and z_min = 1e-3, z_max = 40 (CIFAR-10) / 60 (CelebA 64²) / 100 (LSUN bedroom) for the backward ODE. We further clip the norms of initial samples into (0, 3000) for CIFAR-10, (0, 6000) for CelebA 64² and (0, 30000) for LSUN bedroom. (Minimal illustrative sketches of the training objective and the backward-ODE sampler follow the table.) |
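
The pseudocode row points to Algorithm 1 ("Learning the normalized Poisson Field"). The snippet below is a minimal, illustrative sketch of that kind of objective: regressing a network onto the normalized empirical Poisson (inverse-power) field computed from a data batch lifted into the augmented (N+1)-dimensional space. The function names, the simplified normalization, and the omission of the paper's perturbation schedule (Algorithm 2) are assumptions of this sketch, not the authors' exact procedure.

```python
# Illustrative sketch only: a simplified field-matching objective in the spirit of
# Algorithm 1. The normalization and perturbation details differ from the paper.
import torch

def empirical_poisson_field(x_tilde, batch):
    """Empirical (N+1)-dimensional inverse-power field at augmented query points.

    x_tilde: (B, N+1) perturbed/query points in the augmented space.
    batch:   (n, N+1) data points lifted onto the z = 0 hyperplane.
    """
    diff = x_tilde[:, None, :] - batch[None, :, :]           # (B, n, N+1)
    dist = diff.norm(dim=-1, keepdim=True).clamp_min(1e-12)  # (B, n, 1)
    n_plus_1 = batch.shape[-1]
    return (diff / dist**n_plus_1).mean(dim=1)               # (B, N+1)

def field_matching_loss(model, x_tilde, batch):
    """Squared error between the model output and the normalized empirical field."""
    target = empirical_poisson_field(x_tilde, batch)
    target = target / (target.norm(dim=-1, keepdim=True) + 1e-12)  # simplified normalization
    return ((model(x_tilde) - target) ** 2).sum(dim=-1).mean()
```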
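
The setup and dependency rows also describe sampling: initial points with clipped norms are integrated along the backward ODE from z_max down to z_min using Scipy's RK45 solver. Below is a minimal sketch of that integration loop under the CIFAR-10 values quoted above; `toy_field` is a placeholder stand-in (the paper uses the trained normalized-field network), and the solver tolerances are assumptions.

```python
# Minimal sketch of backward-ODE sampling with Scipy's RK45, using the CIFAR-10
# hyperparameters quoted above (z_max = 40, z_min = 1e-3, initial norm clipped to (0, 3000)).
# `toy_field` is a placeholder; PFGM uses the trained normalized-field network here.
import numpy as np
from scipy.integrate import solve_ivp

N = 3 * 32 * 32                      # CIFAR-10 data dimension
z_max, z_min = 40.0, 1e-3

def toy_field(x, z):
    """Placeholder radial field in the augmented (N+1)-dim space (not the trained model)."""
    aug = np.concatenate([x, [z]])
    return aug / (np.linalg.norm(aug) + 1e-12)

def backward_rhs(z, x):
    # Treat z as the integration "time": dx/dz = v_x(x, z) / v_z(x, z).
    v = toy_field(x, z)
    return v[:-1] / (v[-1] + 1e-12)

# Draw an initial sample on the z = z_max hyperplane and clip its norm into (0, 3000).
x0 = np.random.randn(N)
norm = np.linalg.norm(x0)
x0 = x0 / norm * np.clip(norm, 1e-6, 3000.0)

sol = solve_ivp(backward_rhs, (z_max, z_min), x0, method="RK45", rtol=1e-4, atol=1e-4)
x_sample = sol.y[:, -1]              # approximate sample as z reaches z_min
```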