Rethinking Generative Mode Coverage: A Pointwise Guaranteed Approach

Authors: Peilin Zhong, Yuchen Mo, Chang Xiao, Pengyu Chen, Changxi Zheng

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We now present our major experimental results, while referring to Appendix F for network details and more results. We show that our mixture of generators is able to cover all the modes in various synthetic and real datasets, while existing methods always have some modes missed.
Researcher Affiliation | Academia | Columbia University; {peilin, chang, cxz}@cs.columbia.edu, {yuchen.mo, pengyu.chen}@columbia.edu
Pseudocode | Yes | Algorithm 1, "Constructing a mixture of generators" (see the mixture-sampling sketch after the table).
Open Source Code | No | The paper does not provide an explicit statement about the release of its source code or a link to a code repository.
Open Datasets | Yes | This dataset consists of the entire training dataset of Fashion MNIST (with 60k images) mixed with randomly sampled 100 MNIST images labeled as 1. ... [40] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017. [41] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. (See the dataset-construction sketch after the table.)
Dataset Splits | No | The paper mentions using 'the entire training dataset of Fashion MNIST' but does not specify distinct training, validation, or test splits for its own generative-model experiments, and it gives no percentages, sample counts, or references to predefined splits for its model's training process.
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper does not provide a reproducible description of ancillary software with specific version numbers, such as Python or library versions.
Experiment Setup | No | The paper states, 'The size of generator mixture is always set to be 30 for AdaGAN, MGAN and our method, and all generators share the same network structure.' While this provides some configuration details, it does not include specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) for training their model.
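
The Pseudocode row points to the paper's Algorithm 1, which constructs a mixture of generators. The construction itself is not reproduced here; the following is a minimal sketch of how one typically samples from such a mixture once its generators are trained: pick a generator according to the mixture weights, then draw from it. The `Generator` architecture, the uniform weights, and the latent dimension are assumptions of this example, not details taken from the paper; only the mixture size of 30 is quoted in the table above.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Placeholder generator: latent vector -> flattened 28x28 image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

T = 30                                    # mixture size quoted in the table above
generators = [Generator() for _ in range(T)]
weights = torch.full((T,), 1.0 / T)       # uniform mixture weights (assumption)

def sample_from_mixture(n, latent_dim=100):
    """Draw n samples: choose a generator per sample, then sample from it."""
    idx = torch.multinomial(weights, num_samples=n, replacement=True)
    z = torch.randn(n, latent_dim)
    return torch.stack([generators[i](z[j]) for j, i in enumerate(idx.tolist())])
```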
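
The Open Datasets row quotes a mixed dataset: the full Fashion-MNIST training set plus 100 randomly sampled MNIST images of the digit 1. Below is a minimal sketch, assuming torchvision is available, of how such a mixture could be assembled; the data root, random seed, and use of NumPy arrays are illustrative choices, not steps described in the paper.

```python
import numpy as np
from torchvision import datasets

# Full Fashion-MNIST training set (60k images) and the MNIST training set.
fashion = datasets.FashionMNIST(root="./data", train=True, download=True)
mnist = datasets.MNIST(root="./data", train=True, download=True)

fashion_images = fashion.data.numpy()              # (60000, 28, 28)

# 100 randomly sampled MNIST images of the digit 1 (seed is illustrative).
rng = np.random.default_rng(0)
ones_idx = np.where(mnist.targets.numpy() == 1)[0]
picked = rng.choice(ones_idx, size=100, replace=False)
mnist_ones = mnist.data.numpy()[picked]            # (100, 28, 28)

# Combined training set: 60,100 images in total.
mixed = np.concatenate([fashion_images, mnist_ones], axis=0)
```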