Learning disconnected manifolds: a no GAN’s land

Authors: Ugo Tanielian, Thibaut Issenhuth, Elvis Dohmatob, Jérémie Mary

ICML 2020

Reproducibility assessment: each entry lists the Reproducibility Variable, its Result, and the corresponding LLM Response.

Research Type: Experimental
LLM Response: In the following, we show that our truncation method, JBT, can significantly improve the performances of generative models on several models, metrics and datasets. Furthermore, we compare JBT with over-parametrization techniques specifically designed for disconnected manifold learning. We show that our truncation method reaches or surpasses their performance, while it has the benefit of not modifying the training process of GANs nor using a mixture of generators, which is computationally expensive. Finally, we confirm the efficiency of our method by applying it on top of Big GAN (Brock et al., 2019).
(An illustrative sketch of such a Jacobian-based truncation step appears after this table.)

Researcher Affiliation: Collaboration
LLM Response: Ugo Tanielian (1, 2), Thibaut Issenhuth (2), Elvis Dohmatob (2), Jérémie Mary (2); (1) Université Paris-Sorbonne, Paris, France; (2) Criteo AI Lab, France.

Pseudocode: No
LLM Response: The paper describes its methods verbally and mathematically but does not include any pseudocode or algorithm blocks.

Open Source Code: No
LLM Response: The paper does not contain any explicit statement about releasing the source code for the described methodology, nor a link to a repository.

Open Datasets: Yes
LLM Response: We further study JBT on three different datasets: MNIST (Le Cun et al., 1998), Fashion MNIST (Xiao et al., 2017) and CIFAR10 (Krizhevsky et al., 2009).

Dataset Splits: No
LLM Response: While the paper mentions using standard datasets, it does not provide specific details on the training, validation, or test split percentages or counts.

Hardware Specification: No
LLM Response: The paper does not specify any hardware details (e.g., exact GPU/CPU models, memory amounts) used for running the experiments.

Software Dependencies: No
LLM Response: The paper mentions using "Wasserstein GAN with gradient penalty" and "Big GAN" but does not provide specific version numbers for these or any other software dependencies.

Experiment Setup: Yes
LLM Response: In practice, σ ∈ [1e-4; 1e-2] and N = 10 give consistent results. Except for Big GAN, for all our experiments, we use Wasserstein GAN with gradient penalty (Gulrajani et al., 2017), called WGAN for conciseness.
(An illustrative sketch of the WGAN-GP gradient-penalty term appears after this table.)
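
The paper itself contains no pseudocode (see the Pseudocode row), so the block below is only a minimal sketch of what a Jacobian-based truncation step at sampling time could look like, assuming PyTorch and a pretrained generator G mapping latent vectors to images. The finite-difference estimate of the generator's Jacobian Frobenius norm, the keep_ratio parameter, and all function names are illustrative assumptions rather than the authors' implementation; sigma and N follow the ranges quoted in the Experiment Setup row.

```python
# Hypothetical sketch of a Jacobian-based truncation (JBT-style) sampling step.
# Assumes a pretrained generator G: latent vectors [batch, d] -> images.
import torch

@torch.no_grad()
def estimate_jfn(G, z, sigma=1e-3, N=10):
    # Monte Carlo finite-difference proxy for the Jacobian Frobenius norm of G
    # at each latent code z; sigma in [1e-4, 1e-2] and N = 10 per the quoted setup.
    base = G(z)
    sq = torch.zeros(z.size(0), device=z.device)
    for _ in range(N):
        delta = torch.randn_like(z)                  # random perturbation direction
        diff = (G(z + sigma * delta) - base).flatten(1)
        sq += diff.pow(2).sum(dim=1) / sigma ** 2    # squared directional sensitivity
    return (sq / N).sqrt()                           # one scalar score per sample

@torch.no_grad()
def truncated_sample(G, num_samples, latent_dim, keep_ratio=0.9, device="cpu"):
    # Draw latents, score them, and keep the keep_ratio fraction with the smallest
    # scores; high-score samples are assumed to fall between the target modes.
    z = torch.randn(num_samples, latent_dim, device=device)
    scores = estimate_jfn(G, z)
    k = int(keep_ratio * num_samples)
    return G(z[scores.argsort()[:k]])
```

For example, truncated_sample(G, 1000, 128, keep_ratio=0.9) would generate 1000 candidates and return the 900 with the lowest estimated Jacobian norm, leaving the training procedure of G untouched, which is consistent with the quoted claim that the truncation method does not modify GAN training.
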
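The Experiment Setup row names Wasserstein GAN with gradient penalty (Gulrajani et al., 2017) as the base model. Below is a minimal PyTorch sketch of the standard gradient-penalty term from that reference, included only for context; the critic D, the image-shaped inputs, and the weight lambda_gp = 10 are generic assumptions rather than values reported in this paper.

```python
# Standard WGAN-GP penalty (Gulrajani et al., 2017): push the critic's gradient
# norm towards 1 on random interpolates between real and generated samples.
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)  # per-sample mixing weight
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = D(interp)
    grads = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

This term is added to the critic loss at each update and leaves the generator objective unchanged, so it is independent of any post-hoc truncation applied at sampling time.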