Unifying GANs and Score-Based Diffusion as Generative Particle Models

Authors: Jean-Yves Franceschi, Mike Gartrell, Ludovic Dos Santos, Thibaut Issenhuth, Emmanuel de Bézenac, Mickaël Chen, Alain Rakotomamonjy

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically test the viability of these original models as proofs of concept of potential applications of our framework. We conduct experiments on the unconditional generation task for two standard datasets composed of images: MNIST (LeCun et al., 1998) and 64×64 CelebA (Liu et al., 2015). We consider two reference baselines, EDM (the score-based diffusion model of Karras et al. (2022)) and GANs, and use the Fréchet Inception Distance (FID, Heusel et al., 2017) to test generative performance in Table 3. (A minimal FID evaluation sketch follows the table.)
Researcher Affiliation | Collaboration | Jean-Yves Franceschi, Criteo AI Lab, Paris, France, jycja.franceschi@criteo.com; Mike Gartrell, Criteo AI Lab, Paris, France, mike.gartrell@acm.org; Ludovic Dos Santos, Criteo AI Lab, Paris, France, l.dossantos@criteo.com; Thibaut Issenhuth, Criteo AI Lab, Paris, France and LIGM, École des Ponts, Univ Gustave Eiffel, CNRS, Marne-la-Vallée, France, t.issenhuth@criteo.com; Emmanuel de Bézenac, Seminar for Applied Mathematics, D-MATH, ETH Zürich, Rämistrasse 101, 8092 Zürich, Switzerland, emmanuel.debezenac@sam.math.ethz.ch; Mickaël Chen, Valeo.ai, Paris, France, mickael.chen@valeo.com; Alain Rakotomamonjy, Criteo AI Lab, Paris, France, a.rakotomamonjy@criteo.com
Pseudocode | Yes | Algorithm 1: Training iteration of Score GANs; Algorithm 2: Training iteration of Discriminator Flows; Algorithm 3: Training iteration for Discriminator Flow (detailed).
Open Source Code | Yes | Our open-source code is available at https://github.com/White-Link/gpm.
Open Datasets | Yes | We conduct experiments on the unconditional generation task for two standard datasets composed of images: MNIST (LeCun et al., 1998) and 64×64 CelebA (Liu et al., 2015). (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | MNIST is comprised of a training and testing dataset, but no validation set; we create one for each model training by randomly selecting 10% of the training images. (A validation-split sketch follows the table.)
Hardware Specification | Yes | For all experiments we use one or two Nvidia V100 GPUs with CUDA 11.8.
Software Dependencies | Yes | Our Python source code (tested on version 3.10.4), based on PyTorch (Paszke et al., 2019) (tested on version 1.13.1), is available as open source at https://github.com/White-Link/gpm. (An environment-check sketch follows the table.)
Experiment Setup | Yes | We summarize the model hyperparameters used during training in Tables 4 to 7. See our code for more information. Table 4: Chosen hyperparameters for Discriminator Flows for each dataset.
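
The Research Type row above cites the Fréchet Inception Distance (Heusel et al., 2017) as the generative metric. The following is a minimal sketch of how such an FID evaluation could be reproduced with the torchmetrics library; it is not the authors' evaluation code (which is in the linked repository), and the names real_loader, generator, num_fake_batches, batch_size, and latent_dim are hypothetical placeholders.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

# Hypothetical sketch of an FID evaluation; the placeholder names below are not
# taken from the authors' repository.
fid = FrechetInceptionDistance(feature=2048)

def to_uint8_rgb(x):
    # torchmetrics expects uint8 images of shape (N, 3, H, W) in [0, 255];
    # grayscale MNIST batches need their single channel repeated to 3.
    x = x.clamp(0, 1)
    if x.shape[1] == 1:
        x = x.repeat(1, 3, 1, 1)
    return (x * 255).to(torch.uint8)

# Accumulate statistics of real images.
for real_batch, _ in real_loader:
    fid.update(to_uint8_rgb(real_batch), real=True)

# Accumulate statistics of generated images.
with torch.no_grad():
    for _ in range(num_fake_batches):
        fake_batch = generator(torch.randn(batch_size, latent_dim))
        fid.update(to_uint8_rgb(fake_batch), real=False)

print("FID:", float(fid.compute()))
```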
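The Open Datasets row lists MNIST and 64×64 CelebA, both available through torchvision. The sketch below shows one way they could be loaded; the 140-pixel center crop for CelebA is an assumption made for illustration, since this summary does not state the authors' exact preprocessing.

```python
import torchvision.transforms as T
from torchvision.datasets import MNIST, CelebA

# MNIST: 28x28 grayscale digits (LeCun et al., 1998).
mnist_train = MNIST(root="data", train=True, download=True, transform=T.ToTensor())

# CelebA resized to 64x64 (Liu et al., 2015). The center crop size is an assumption;
# the authors' preprocessing pipeline is defined in their repository.
celeba_transform = T.Compose([
    T.CenterCrop(140),
    T.Resize(64),
    T.ToTensor(),
])
celeba_train = CelebA(root="data", split="train", download=True, transform=celeba_transform)
```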
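The Dataset Splits row states that a validation set is created by randomly selecting 10% of the MNIST training images. A sketch with torch.utils.data.random_split is below; the fixed generator seed is an assumption, as the summary only says the split is drawn anew for each model training.

```python
import torch
from torch.utils.data import random_split
import torchvision.transforms as T
from torchvision.datasets import MNIST

mnist_train = MNIST(root="data", train=True, download=True, transform=T.ToTensor())

# Hold out 10% of the training images as a validation set, as described above.
# The seed is illustrative; the summary only states that the selection is random.
val_size = int(0.1 * len(mnist_train))
train_size = len(mnist_train) - val_size
train_set, val_set = random_split(
    mnist_train, [train_size, val_size],
    generator=torch.Generator().manual_seed(0),
)
```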
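The Hardware Specification and Software Dependencies rows report Python 3.10.4, PyTorch 1.13.1, CUDA 11.8, and one or two Nvidia V100 GPUs. The short check below prints the corresponding values of a local environment; the expected numbers in the comments simply mirror the report.

```python
import sys
import torch

# Reported environment: Python 3.10.4, PyTorch 1.13.1, CUDA 11.8, 1-2 V100 GPUs.
print("python:", sys.version.split()[0])        # expected 3.10.4
print("torch:", torch.__version__)              # expected 1.13.1
print("cuda (build):", torch.version.cuda)      # expected 11.8 (depends on the installed wheel)
print("cuda available:", torch.cuda.is_available())
print("gpu count:", torch.cuda.device_count())  # 1 or 2 in the reported setup
if torch.cuda.is_available():
    print("gpu 0:", torch.cuda.get_device_name(0))
```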