Learning Generative Adversarial Networks from Multiple Data Sources

Authors: Trung Le, Quan Hoang, Hung Vu, Tu Dinh Nguyen, Hung Bui, Dinh Phung

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments to demonstrate the merit of P2GAN in two applications: generating data with constraints and addressing the mode collapsing problem. We use CIFAR-10, STL-10, and ImageNet datasets and compute Fréchet Inception Distance to evaluate P2GAN's effectiveness in addressing the mode collapsing problem (a hedged FID sketch is given after the table).
Researcher Affiliation | Collaboration | 1 Faculty of Information Technology, Monash University, Australia; 2 AI Research Lab, Trusting Social, Australia; 3 Google DeepMind
Pseudocode | No | The paper describes the model's formulation and training process in text and mathematical equations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'The supplementary material for this paper can be found at the following url address: https://app.box.com/v/p2gan-supp.' However, it does not explicitly state that the source code for the methodology is provided within this supplementary material or elsewhere.
Open Datasets | Yes | We use CIFAR-10, STL-10, and ImageNet datasets... CIFAR-10 contains 50,000 32×32 training images of 10 classes... STL-10 contains about 100,000 96×96 images... ImageNet is the largest and most diverse dataset, with more than 1.2 million images from 1,000 classes.
Dataset Splits | No | The paper mentions using the CIFAR-10, STL-10, and ImageNet datasets and specific training image counts (e.g., 'CIFAR-10 contains 50,000 32×32 training images'), but it does not explicitly provide the training/validation/test splits (e.g., percentages or sample counts for each split) needed for reproduction.
Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and specific divergences (Jensen-Shannon, KL), but it does not provide version numbers for any software dependencies, frameworks, or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We use Adam optimizer with a batch size of 64. The learning rate and the first-order momentum are set at 0.0002 and 0.5, respectively. Regarding the pushing parameter α...we employed a gentle force of 0.01 for all experiments. We vary the total number of generators K in {1, 3, 5, 10, 15}... We add a new generator for every 15, 10, 5, 3 epochs... The learning process is terminated after 150 epochs for CIFAR-10, 100 epochs for STL-10, and 50 epochs for ImageNet. (A hedged configuration sketch is given after the table.)
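
Since the paper does not release source code (see the Open Source Code row), the snippet below is only a minimal PyTorch-style sketch that collects the training hyperparameters quoted in the Experiment Setup row. The placeholder generator and discriminator modules, the variable names, and the per-dataset epoch dictionary are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hyperparameters quoted in the "Experiment Setup" row. The generator and
# discriminator below are trivial placeholders so the snippet runs end to end;
# they are NOT the P2GAN architecture.
BATCH_SIZE = 64
LEARNING_RATE = 2e-4   # 0.0002
BETA1 = 0.5            # first-order momentum for Adam
PUSH_ALPHA = 0.01      # "gentle force" pushing parameter alpha
NUM_GENERATORS = 10    # varied over {1, 3, 5, 10, 15} in the paper
MAX_EPOCHS = {"cifar10": 150, "stl10": 100, "imagenet": 50}

generator = torch.nn.Linear(100, 3 * 32 * 32)     # placeholder module
discriminator = torch.nn.Linear(3 * 32 * 32, 1)   # placeholder module

opt_g = torch.optim.Adam(generator.parameters(),
                         lr=LEARNING_RATE, betas=(BETA1, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(),
                         lr=LEARNING_RATE, betas=(BETA1, 0.999))
```

These settings alone do not reproduce the paper: the P2GAN architecture, the pushing term weighted by α, and the schedule for adding generators every 15, 10, 5, or 3 epochs are described only in the paper's text.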
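
The mode-collapse evaluation relies on the Fréchet Inception Distance (see the Research Type row). Below is a minimal NumPy/SciPy sketch of the standard FID formula, ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), assuming Inception pool3 features have already been extracted; the function names and the 2048-dimensional feature shape are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Matrix square root of the covariance product; retry with a small
    # diagonal offset if the result is numerically singular.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
    # Discard tiny imaginary components introduced by numerical error.
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * np.trace(covmean)

def fid(feats_real, feats_fake):
    """feats_real, feats_fake: (N, 2048) Inception pool3 activations of
    real and generated images, respectively (assumed precomputed)."""
    mu_r, sigma_r = feats_real.mean(axis=0), np.cov(feats_real, rowvar=False)
    mu_f, sigma_f = feats_fake.mean(axis=0), np.cov(feats_fake, rowvar=False)
    return frechet_distance(mu_r, sigma_r, mu_f, sigma_f)
```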