Unpaired Multi-Domain Image Generation via Regularized Conditional GANs

Authors: Xudong Mao, Qing Li

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed model on several tasks for which paired training data is not given, including the generation of edges and photos, the generation of faces with different attributes, etc. The experimental results show that our model can successfully generate corresponding images for all these tasks, while outperforming the baseline methods.
Researcher Affiliation | Academia | Xudong Mao and Qing Li, Department of Computer Science, City University of Hong Kong; xudong.xdmao@gmail.com, itqli@cityu.edu.hk
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our implementation is available at https://github.com/xudonmao/RegCGAN.
Open Datasets | Yes | We first evaluate RegCGAN on MNIST and USPS datasets. ... The Handbag [Zhu et al., 2016] and Shoe [Yu and Grauman, 2014] datasets are used for this task. ... We evaluate RegCGAN on the CelebA dataset [Liu et al., 2014]. ... Chairs [Aubry et al., 2014] and Cars [Fidler et al., 2012]. ... The NYU depth dataset [Silberman et al., 2012] is used... ... Monet-style dataset [Zhu et al., 2017]. ... Summer and Winter dataset [Zhu et al., 2017].
Dataset Splits | No | The paper mentions using grid search for hyperparameters but does not provide explicit training/validation/test splits or a splitting methodology covering all experiments. For unsupervised domain adaptation, it reports sampling 2,000 images from MNIST and 1,800 images from USPS and evaluating on a 'sampled set' and a 'test set', but no distinct validation split is specified.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or specific computing platforms) used for running the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., specific versions of deep learning frameworks like TensorFlow or PyTorch, or programming languages like Python).
Experiment Setup | Yes | We use the Adam optimizer with learning rates of 0.0005 for LSGAN and 0.0002 for the standard GAN. For the hyperparameters in Equations 2 and 3, we set λ = 0.1, β = 0.004, and γ = 1.0, found by grid search. ...the generator consists of four transposed convolutional layers and the discriminator is a variant of LeNet [Lecun et al., 1998].
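
To make the reported setup concrete, below is a minimal PyTorch-style sketch of the training configuration. Only the Adam optimizer, the learning rates (0.0005 for LSGAN, 0.0002 for standard GAN), the hyperparameter values λ = 0.1, β = 0.004, γ = 1.0, the four-transposed-convolution generator, and the LeNet-style discriminator come from the paper excerpt; the latent size, image size, Adam betas, conditioning scheme, and layer widths are illustrative assumptions, and the RegCGAN regularization terms from Equations 2 and 3 are not reproduced here because their exact form is not quoted above.

```python
# Hedged sketch of the reported training configuration (not the authors' code).
import torch
import torch.nn as nn

LR_LSGAN, LR_GAN = 5e-4, 2e-4          # reported learning rates
LAMBDA, BETA, GAMMA = 0.1, 0.004, 1.0  # reported hyperparameters (Eqs. 2 and 3)

class Generator(nn.Module):
    """Four transposed-convolution layers, as described for the MNIST/USPS setup."""
    def __init__(self, z_dim=100, cond_dim=2):  # latent and condition sizes are assumptions
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + cond_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Tanh(),  # 32x32 single-channel output (assumed size)
        )

    def forward(self, z, domain_onehot):
        # Concatenate noise with a one-hot domain condition (assumed conditioning scheme).
        x = torch.cat([z, domain_onehot], dim=1).view(z.size(0), -1, 1, 1)
        return self.net(x)

G = Generator()
D = nn.Sequential(  # LeNet-style discriminator stand-in; exact layer sizes are assumed
    nn.Conv2d(1, 20, 5), nn.MaxPool2d(2), nn.LeakyReLU(0.2),
    nn.Conv2d(20, 50, 5), nn.MaxPool2d(2), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(50 * 5 * 5, 500), nn.LeakyReLU(0.2), nn.Linear(500, 1),
)

# Adam with the reported LSGAN learning rate; betas are an assumption, not from the paper.
opt_G = torch.optim.Adam(G.parameters(), lr=LR_LSGAN, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=LR_LSGAN, betas=(0.5, 0.999))
```

For the standard-GAN variant, the same construction would use LR_GAN in place of LR_LSGAN; the regularization weights LAMBDA, BETA, and GAMMA would scale the corresponding loss terms during training.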