Can Push-forward Generative Models Fit Multimodal Distributions?
Authors: Antoine Salmona, Valentin De Bortoli, Julie Delon, Agnès Desolneux
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4, we illustrate these theoretical results on several experiments, showing the difficulties of GANs and VAEs in simulating multimodal distributions. We compare these models with SGMs and show experimentally that SGMs seem able to correctly generate multimodal distributions while keeping the Lipschitz constant of the score network relatively small, suggesting that these models do not suffer from the previously mentioned limitations. (A hedged sketch of an empirical Lipschitz estimate follows the table.) |
| Researcher Affiliation | Academia | Antoine Salmona (Centre Borelli, ENS Paris Saclay, France); Agnès Desolneux (Centre Borelli, CNRS and ENS Paris Saclay, France); Julie Delon (MAP5, Université Paris Cité, France, and Institut Universitaire de France (IUF)); Valentin De Bortoli (Center for Sciences of Data, CNRS and ENS Ulm, France) |
| Pseudocode | No | The paper describes methodologies and experiments in prose and mathematical formulations but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured code-like steps. |
| Open Source Code | No | The paper does not contain any statements about releasing code or links to source code repositories for the described methodology. |
| Open Datasets | Yes | Dhariwal and Nichol (2021) trained an unconditional Score-based Generative Model (SGM) (Song and Ermon, 2019; Ho et al., 2020) on ImageNet (Russakovsky et al., 2015) and achieved state-of-the-art generation. We train a VAE, a GAN and an SGM on two datasets derived from MNIST (LeCun et al., 1998): first, two images of two different digits (3 and 7) are chosen and 10000 noisy versions of these images are drawn with a noise amount of σ = 0.15, forming a dataset of n = 20002 independent samples drawn from a balanced mixture of two Gaussian distributions in dimension 784 = 28 × 28. (A sketch of this dataset construction follows the table.) |
| Dataset Splits | No | The paper mentions using '50000 independent samples' for the univariate case and 'n = 20002 independent samples' for the MNIST-derived experiments, and training on the 'subset of all 3 and 7 of MNIST'. However, it does not provide specific train/validation/test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) to reproduce the partitioning of data. |
| Hardware Specification | No | The paper describes the experimental setup in terms of models and datasets but does not provide any specific hardware details such as GPU or CPU models, memory specifications, or cloud computing instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of various models (VAEs, GANs, SGMs) and techniques, but it does not list specific software dependencies with their version numbers (e.g., 'PyTorch 1.9' or 'TensorFlow 2.x') that are needed for reproducibility. |
| Experiment Setup | Yes | All details on the experiments and architecture of the networks can be found in Appendix S5. |
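
The Research Type row notes that SGMs appear to keep the Lipschitz constant of the score network relatively small. Since the paper releases no code, the following is only a minimal sketch of how such a constant could be probed empirically; the helper name `empirical_lipschitz`, the use of PyTorch, and the treatment of the score network as a map of x alone at a fixed noise level are assumptions, not the authors' procedure.

```python
import torch

def empirical_lipschitz(score_net, x, n_pairs=1000, eps=1e-2):
    """Lower-bound the Lipschitz constant of `score_net` by the largest
    finite-difference ratio ||s(b) - s(a)|| / ||b - a|| over random
    perturbations of data points (the network is evaluated at a fixed
    noise level, so it is treated here as a function of x only)."""
    idx = torch.randint(0, x.shape[0], (n_pairs,))
    a = x[idx]
    b = a + eps * torch.randn_like(a)           # nearby perturbed points
    with torch.no_grad():
        num = (score_net(b) - score_net(a)).flatten(1).norm(dim=1)
    den = (b - a).flatten(1).norm(dim=1)
    return (num / den).max().item()             # largest observed ratio
```

Any such ratio only lower-bounds the true Lipschitz constant; the exact measurement procedure used by the authors is not specified in the excerpts above.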
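The Open Datasets row describes building a two-mode dataset from MNIST. Below is a minimal sketch of that construction, assuming torchvision's MNIST loader, 10000 noisy copies per image, and arbitrarily chosen digit images; the helper name `make_two_mode_mnist`, the seed, and the image indices are hypothetical and not taken from the paper.

```python
import torch
from torchvision import datasets, transforms

def make_two_mode_mnist(root="./data", sigma=0.15, n_per_mode=10000, seed=0):
    """Hypothetical construction of the two-mode dataset: pick one image of
    a 3 and one of a 7, then draw noisy copies x = image + sigma * N(0, I)
    in dimension 784 = 28 x 28, giving a balanced two-Gaussian mixture."""
    torch.manual_seed(seed)
    mnist = datasets.MNIST(root, train=True, download=True,
                           transform=transforms.ToTensor())
    # First occurrence of a 3 and of a 7 (the paper does not say which
    # images were used, so these indices are an arbitrary choice).
    idx3 = next(i for i, (_, y) in enumerate(mnist) if y == 3)
    idx7 = next(i for i, (_, y) in enumerate(mnist) if y == 7)
    modes = torch.stack([mnist[idx3][0].flatten(), mnist[idx7][0].flatten()])
    samples = torch.cat([
        mode + sigma * torch.randn(n_per_mode, 784) for mode in modes
    ])
    return samples[torch.randperm(samples.shape[0])]
```

The returned tensor has 2 × n_per_mode rows in dimension 784 and could be fed to the VAE, GAN, and SGM training setups whose details the paper defers to its Appendix S5.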