cGANs with Projection Discriminator

Authors: Takeru Miyato, Masanori Koyama

ICLR 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In order to evaluate the effectiveness of our newly proposed architecture for the discriminator, we conducted two sets of experiments: class conditional image generation and super-resolution on ILSVRC2012 (ImageNet) dataset (Russakovsky et al., 2015)." |
| Researcher Affiliation | Collaboration | Takeru Miyato (Preferred Networks, Inc., miyato@preferred.jp) and Masanori Koyama (Ritsumeikan University, koyama.masanori@gmail.com). |
| Pseudocode | No | The paper provides architectural diagrams and mathematical formulations but no pseudocode or algorithm blocks. (A hedged sketch of the projection formulation follows this table.) |
| Open Source Code | Yes | "The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection." |
| Open Datasets | Yes | The experiments use the publicly available ILSVRC2012 (ImageNet) dataset (Russakovsky et al., 2015); see the quote under Research Type. |
| Dataset Splits | Yes | "We checked MS-SSIM (Wang et al., 2003) and the classification accuracy of the inception model on the generated images using the validation set of the ILSVRC2012 dataset." (An evaluation sketch follows this table.) |
| Hardware Specification | No | The paper reports which models were trained and for how many iterations, but gives no details about the hardware (e.g., GPU or CPU models) used for the experiments. |
| Software Dependencies | Yes | The code is implemented with Chainer (Tokui et al., 2015), and the Adam optimizer (Kingma & Ba, 2015) is used for all experiments. |
| Experiment Setup | Yes | "For all experiments, we used Adam optimizer (Kingma & Ba, 2015) with hyper-parameters set to α = 0.0002, β1 = 0, β2 = 0.9. We updated the discriminator five times per each update of the generator. For each method, we updated the generator 450K times, and applied linear decay for the learning rate after 400K iterations so that the rate would be 0 at the end." (A training-schedule sketch follows this table.) |
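
Since the paper ships no pseudocode, here is a minimal sketch of its central projection formulation, f(x, y) = y^T V φ(x) + ψ(φ(x)). It is written PyTorch-style rather than in the authors' Chainer, and the class name, layer sizes, and module layout are illustrative assumptions, not the released implementation.

```python
# Minimal sketch (assumed PyTorch translation; the released code is Chainer) of
# the projection output f(x, y) = y^T V phi(x) + psi(phi(x)): the conditional
# term is an inner product between a learned class embedding (a row of V) and
# the image features phi(x). Sizes and names are illustrative.
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    def __init__(self, num_classes: int, feature_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_classes, feature_dim)  # rows of V
        self.psi = nn.Linear(feature_dim, 1)                 # unconditional term

    def forward(self, phi_x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # phi_x: (batch, feature_dim) features from the discriminator backbone
        # y:     (batch,) integer class labels
        projection = (self.embed(y) * phi_x).sum(dim=1, keepdim=True)
        return self.psi(phi_x) + projection  # (batch, 1) discriminator logits
```

In the paper, φ is a spectrally normalized ResNet backbone; the sketch covers only the output head, which is what distinguishes the projection discriminator from a concatenation-based conditional one.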
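The Dataset Splits row quotes the evaluation protocol: intra-class MS-SSIM plus inception classification accuracy on generated images, measured against the ILSVRC2012 validation classes. Below is a rough sketch of the accuracy half, assuming torchvision's pretrained Inception-v3 stands in for "the inception model" (the paper does not name the exact checkpoint).

```python
# Hedged sketch: label consistency of generated samples, using torchvision's
# pretrained Inception-v3 as an assumed stand-in for "the inception model".
import torch
from torchvision.models import inception_v3

model = inception_v3(weights="IMAGENET1K_V1").eval()

@torch.no_grad()
def inception_accuracy(images: torch.Tensor, labels: torch.Tensor) -> float:
    # images: (N, 3, 299, 299) generated samples, already resized and
    #         normalized with the ImageNet mean/std Inception-v3 expects
    # labels: (N,) integer class ids used to condition the generator
    preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()
```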
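The Experiment Setup row fully pins down the optimization schedule, so it translates almost line for line into code. Below is a hedged sketch of that schedule, again PyTorch-style; the stub modules and elided loss computations are placeholders, since the paper specifies only the hyper-parameters and the update cadence.

```python
# Hedged sketch of the reported schedule: Adam(alpha=2e-4, beta1=0, beta2=0.9),
# five discriminator updates per generator update, 450K generator updates,
# and linear LR decay to zero over the final 50K iterations.
import torch
import torch.nn as nn

generator = nn.Linear(128, 3 * 64 * 64)    # placeholder for the real G
discriminator = nn.Linear(3 * 64 * 64, 1)  # placeholder for the real D

BASE_LR, TOTAL_G_UPDATES, DECAY_START = 2e-4, 450_000, 400_000
opt_g = torch.optim.Adam(generator.parameters(), lr=BASE_LR, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=BASE_LR, betas=(0.0, 0.9))

def lr_at(iteration: int) -> float:
    """Constant LR up to 400K updates, then linear decay to 0 at 450K."""
    if iteration < DECAY_START:
        return BASE_LR
    return BASE_LR * (TOTAL_G_UPDATES - iteration) / (TOTAL_G_UPDATES - DECAY_START)

for it in range(TOTAL_G_UPDATES):
    for opt in (opt_g, opt_d):
        for group in opt.param_groups:
            group["lr"] = lr_at(it)
    for _ in range(5):  # five discriminator updates per generator update
        opt_d.zero_grad()
        # ... compute the discriminator loss and call .backward() here ...
        opt_d.step()
    opt_g.zero_grad()
    # ... compute the generator loss and call .backward() here ...
    opt_g.step()
```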