GDPP: Learning Diverse Generations using Determinantal Point Processes

Authors: Mohamed Elfeki, Camille Couprie, Morgane Riviere, Mohamed Elhoseiny

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our Generative DPP approach shows a consistent resistance to mode-collapse on a wide variety of synthetic data and natural image datasets including MNIST, CIFAR-10, and CelebA, while outperforming state-of-the-art methods for data-efficiency, generation quality, and convergence-time, while being 5.8x faster than its closest competitor.
Researcher Affiliation | Collaboration | 1University of Central Florida, 2Facebook Artificial Intelligence Research, 3King Abdullah University of Science and Technology. Correspondence to: Mohamed Elfeki <elfeki@cs.ucf.edu>, Couprie, Rivière <{coupriec,mriviere}@fb.com>, Mohamed Elhoseiny <mohamed.elhoseiny@kaust.edu.sa>.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | https://github.com/M-Elfeki/GDPP (see the sketch after this table)
Open Datasets | Yes | Stacked-MNIST: A variant of MNIST (LeCun, 1998)...; CIFAR-10: We evaluate the methods on CIFAR-10...; CelebA: Finally, to evaluate the performance of our loss on large-scale adversarial training... CelebA dataset (Liu et al., 2018)...
Dataset Splits | No | The paper mentions training and testing on datasets but does not explicitly describe a validation set or specify train/validation/test splits.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper mentions using frameworks such as DCGAN and VAE, but it does not specify version numbers for any software dependencies or libraries.
Experiment Setup | Yes | The same architecture is used for all methods, and hyperparameters were tuned separately for each approach to achieve the best performance (see Appendix A for details). We train all models for 25K iterations...; We train all the models for 15,000 iterations...; We evaluate the methods on CIFAR-10 after training all the models for 100K iterations.
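
To give a concrete feel for the kind of loss the linked repository implements, below is a minimal, hypothetical sketch of a DPP-style diversity penalty on two feature batches. The kernel choice (linear kernel over L2-normalized features), the eigenvalue-matching and eigenvector-alignment terms, and their weighting are assumptions made for illustration only; consult https://github.com/M-Elfeki/GDPP for the authors' exact GDPP loss.

```python
import torch
import torch.nn.functional as F

def dpp_diversity_loss(real_feats: torch.Tensor, fake_feats: torch.Tensor) -> torch.Tensor:
    """Hypothetical DPP-style diversity penalty (illustrative sketch only).

    Builds a similarity kernel per batch, eigendecomposes both kernels, and
    penalizes mismatch between the fake and real spectra (magnitude) and
    eigenvector structure (alignment). Inputs are (batch, dim) feature tensors.
    """
    # Linear kernels over L2-normalized features -> (batch, batch) PSD matrices.
    real = F.normalize(real_feats, dim=1)
    fake = F.normalize(fake_feats, dim=1)
    k_real = real @ real.t()
    k_fake = fake @ fake.t()

    # Symmetric eigendecomposition; eigenvectors are the columns.
    eval_r, evec_r = torch.linalg.eigh(k_real)
    eval_f, evec_f = torch.linalg.eigh(k_fake)

    # Magnitude term: match the eigenvalue spectrum of the real kernel.
    magnitude = F.mse_loss(eval_f, eval_r.detach())

    # Structure term: align eigenvectors, weighted by normalized real eigenvalues.
    # abs() guards against the arbitrary sign of eigenvectors.
    weights = (eval_r / (eval_r.max() + 1e-8)).detach()
    cos = F.cosine_similarity(evec_f, evec_r.detach(), dim=0).abs()
    structure = (weights * (1.0 - cos)).sum()

    return magnitude + structure
```

In a GAN training loop, real_feats and fake_feats would typically be discriminator features of a real batch and a generated batch, and the returned scalar would be added to the generator objective; that usage is likewise an assumption about how such a term would be wired in, not a description of the repository's code.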