Learning to Discover Cross-Domain Relations with Generative Adversarial Networks
Authors: Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jung Kwon Lee, Jiwon Kim
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To empirically demonstrate our explanations on the differences between a standard GAN, a GAN with a reconstruction loss, and our proposed model (DiscoGAN), we designed an illustrative experiment based on synthetic data in 2-dimensional domains A and B. |
| Researcher Affiliation | Industry | SK T-Brain, Seoul, South Korea. Correspondence to: Taeksoo Kim <jazzsaxmafia@sktbrain.com>. |
| Pseudocode | No | The paper describes the model architecture and loss functions, but no structured pseudocode or algorithm blocks are provided (see the loss-function sketch after this table). |
| Open Source Code | No | No statement regarding the release of source code or a link to a code repository was found in the paper. |
| Open Datasets | Yes | We used a Car dataset (Fidler et al., 2012)... Next, we use a Face dataset (Paysan et al., 2009)... We also applied the face attribute conversion task on CelebA and Facescrub datasets (Liu et al., 2015; Ng & Winkler, 2014)... 3D rendered images of chairs (Aubry et al., 2014)... generate realistic photos of handbags (Zhu et al., 2016) and shoes (Yu & Grauman, 2014). |
| Dataset Splits | No | The paper mentions splitting datasets into 'train set and test set' and further splitting the train set for domain A and B samples, but it does not specify a validation split or provide exact percentages or sample counts for any of the splits. |
| Hardware Specification | Yes | All computations were conducted on a single machine with an Nvidia Titan X Pascal GPU and an Intel(R) Xeon(R) E5-1620 CPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer and Batch Normalization, but it does not specify versions for these or any other software libraries or frameworks used. |
| Experiment Setup | Yes | In each real domain experiment, all input images and translated images were of size 64 × 64 × 3. For training, we employed a learning rate of 0.0002 and used the Adam optimizer (Kingma & Ba, 2015) with β1 = 0.5 and β2 = 0.999. We applied Batch Normalization (Ioffe & Szegedy, 2015) to all convolution and deconvolution layers except the first and the last layers, and applied a weight decay regularization coefficient of 10⁻⁴ and a minibatch of size 200. (A configuration sketch follows the table.) |
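Since the paper provides no pseudocode, the following is a minimal sketch of the generator objective it describes: two coupled translators (A→B and B→A) trained with a reconstruction loss in each direction plus a standard GAN loss in each target domain. The module names `G_AB`, `G_BA`, `D_A`, `D_B`, the choice of MSE for reconstruction, and the sigmoid-output discriminators are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def generator_loss(G_AB, G_BA, D_A, D_B, x_A, x_B):
    """Generator objective: two GAN losses plus two reconstruction losses."""
    x_AB = G_AB(x_A)        # translate A -> B
    x_ABA = G_BA(x_AB)      # reconstruct A -> B -> A
    x_BA = G_BA(x_B)        # translate B -> A
    x_BAB = G_AB(x_BA)      # reconstruct B -> A -> B

    # Reconstruction losses couple the two mappings; MSE is an assumed
    # choice of distance here, not necessarily the one the authors used.
    l_recon = F.mse_loss(x_ABA, x_A) + F.mse_loss(x_BAB, x_B)

    # GAN losses: each translated image should fool the discriminator of
    # its target domain (discriminators assumed to end in a sigmoid).
    d_b_fake = D_B(x_AB)
    d_a_fake = D_A(x_BA)
    l_gan = F.binary_cross_entropy(d_b_fake, torch.ones_like(d_b_fake)) \
          + F.binary_cross_entropy(d_a_fake, torch.ones_like(d_a_fake))

    return l_gan + l_recon
```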
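The quoted training setup translates directly into an optimizer configuration. The sketch below assumes PyTorch; only the hyperparameters (learning rate 0.0002, β1 = 0.5, β2 = 0.999, weight decay 10⁻⁴, minibatch size 200) come from the paper, and the network variables in the usage comment are placeholders.

```python
import torch

def make_optimizer(params):
    # Hyperparameters as reported in the paper's experiment setup.
    return torch.optim.Adam(
        params,
        lr=2e-4,              # learning rate 0.0002
        betas=(0.5, 0.999),   # β1 = 0.5, β2 = 0.999
        weight_decay=1e-4,    # weight decay coefficient 10^-4
    )

# Typical usage (one optimizer per network group), e.g.:
#   opt_G = make_optimizer(list(G_AB.parameters()) + list(G_BA.parameters()))
#   opt_D = make_optimizer(list(D_A.parameters()) + list(D_B.parameters()))
# with a DataLoader of batch_size=200 over 64 × 64 × 3 images.
```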