Robust Conditional Generative Adversarial Networks

Authors: Grigorios G. Chrysos, Jean Kossaifi, Stefanos Zafeiriou

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally scrutinize the sensitivity of the hyper-parameters and evaluate our model in the face of intense noise. Moreover, thorough experimentation with both images from natural scenes and human faces is conducted in two different tasks. We compare our model with both the state-of-the-art cGAN and the recent method of Rick Chang et al. (2017). The experimental results demonstrate that RoCGAN outperform the baseline by a large margin in all cases.
Researcher Affiliation | Academia | Grigorios G. Chrysos¹, Jean Kossaifi¹, Stefanos Zafeiriou¹; ¹Department of Computing, Imperial College London, UK
Pseudocode | No | The paper describes the model architecture and training process in textual descriptions and figures, but it does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | No | The paper does not include any explicit statement about making the source code available, nor does it provide a link to a code repository.
Open Datasets | Yes | The 4,900 samples of the VOC 2007 Challenge (Everingham et al., 2010) form the training set, while the 10,000 samples of tiny ImageNet (Deng et al., 2009) consist the testing set. In this experiment we utilize the MS-Celeb (Guo et al., 2016) as the training set (3.4 million samples), and the whole Celeb-A (Liu et al., 2015) as the testing set (202,500 samples). (A sketch of this dataset assignment appears after the table.)
Dataset Splits | No | The validation and selection of the hyper-parameters was done in a withheld set of images. (No specific details about the size or percentage of the validation set are provided.)
Hardware Specification | No | The paper mentions "a GPU" in the context of previous work but does not specify the particular GPU models, CPUs, or other hardware components used for the experiments.
Software Dependencies | No | The paper mentions the ADAM optimizer but does not name the software libraries, frameworks, or version numbers used in the implementation.
Experiment Setup | Yes | The values of the additional hyper-parameters are λ_l = 25, λ_ae = 100 and λ_decov = 1; the common hyper-parameters with the vanilla cGAN, e.g. λ_c, λ_π, remain the same. We utilize the ADAM optimizer with a learning rate of 2 × 10^-5 for all our experiments. The batch size is 128 for images of faces and 64 for the natural scenes. (A configuration sketch appears after the table.)
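
The Open Datasets row can be summarized as a small data-assignment mapping. Only the dataset names and sample counts come from the quoted text; the dictionary layout and key names below are illustrative assumptions, not released code.

```python
# Hedged sketch of the train/test dataset assignment reported in the paper.
# Only the dataset names and sample counts are taken from the quoted text;
# the structure and key names are illustrative assumptions.
DATASETS = {
    "natural_scenes": {
        "train": ("VOC 2007 Challenge", 4_900),   # Everingham et al., 2010
        "test": ("tiny ImageNet", 10_000),        # Deng et al., 2009
    },
    "faces": {
        "train": ("MS-Celeb", 3_400_000),         # Guo et al., 2016
        "test": ("Celeb-A", 202_500),             # Liu et al., 2015
    },
}
```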
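
The Experiment Setup row translates into a compact training configuration. The sketch below assumes a PyTorch implementation (the paper does not release code), and the variable and function names are hypothetical; only the numeric values come from the quoted text.

```python
import torch

# Loss-term weights reported in the paper (see the paper for the exact
# definition of each corresponding loss term).
LOSS_WEIGHTS = {
    "lambda_l": 25,
    "lambda_ae": 100,
    "lambda_decov": 1,
    # lambda_c and lambda_pi are stated to keep the vanilla cGAN values,
    # which the quoted text does not enumerate.
}

# Reported batch sizes: 128 for face images, 64 for natural scenes.
BATCH_SIZES = {"faces": 128, "natural_scenes": 64}


def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """ADAM optimizer with the reported learning rate of 2e-5."""
    return torch.optim.Adam(model.parameters(), lr=2e-5)
```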