Coupled Generative Adversarial Networks
Authors: Ming-Yu Liu, Oncel Tuzel
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply CoGAN to several joint image distribution learning tasks. Through convincing visualization results and quantitative evaluations, we verify its effectiveness. In the experiments, we emphasized there were no corresponding images in the different domains in the training sets. |
| Researcher Affiliation | Industry | Ming-Yu Liu, Mitsubishi Electric Research Labs (MERL), mliu@merl.com; Oncel Tuzel, Mitsubishi Electric Research Labs (MERL), oncel@merl.com |
| Pseudocode | No | The paper describes the learning algorithm and model details in text and equations, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | An implementation of CoGAN is available in https://github.com/mingyuliutw/cogan. |
| Open Datasets | Yes | We used the MNIST training set to train CoGANs for the following two tasks. We used the CelebFaces Attributes dataset [14] for the experiments. We used the RGBD dataset [15] and the NYU dataset [16] for learning joint distribution of color and depth images. |
| Dataset Splits | No | The learning hyperparameters were determined via a validation set. This statement confirms a validation set was used, but the paper lacks specific details such as split percentages or sample counts needed for reproducibility. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware details such as GPU/CPU models or other system specifications used for experiments. |
| Software Dependencies | No | The paper mentions algorithms like ADAM, but does not provide specific software dependencies or library versions (e.g., TensorFlow, PyTorch, scikit-learn) with their version numbers. |
| Experiment Setup | Yes | We used the ADAM algorithm [11] for training and set the learning rate to 0.0002, the 1st momentum parameter to 0.5, and the 2nd momentum parameter to 0.999 as suggested in [12]. The mini-batch size was 128. We trained the CoGAN for 25000 iterations. These hyperparameters were fixed for all the visualization experiments. |
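The reported optimizer settings can be captured in a short sketch. The snippet below is a minimal pure-Python illustration of an ADAM update (Kingma & Ba) configured with the hyperparameters the paper reports (learning rate 0.0002, 1st momentum 0.5, 2nd momentum 0.999, batch size 128, 25000 iterations); it is not the authors' implementation, and the `adam_step` helper and the toy objective are hypothetical.

```python
# Hyperparameters as reported in the paper's experiment setup.
LR, BETA1, BETA2, EPS = 2e-4, 0.5, 0.999, 1e-8
BATCH_SIZE, ITERATIONS = 128, 25000

def adam_step(theta, grad, m, v, t):
    """One ADAM update for a single scalar parameter (illustrative only)."""
    m = BETA1 * m + (1 - BETA1) * grad        # biased 1st-moment estimate
    v = BETA2 * v + (1 - BETA2) * grad ** 2   # biased 2nd-moment estimate
    m_hat = m / (1 - BETA1 ** t)              # bias-corrected 1st moment
    v_hat = v / (1 - BETA2 ** t)              # bias-corrected 2nd moment
    theta -= LR * m_hat / (v_hat ** 0.5 + EPS)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 from x = 1.0 for 100 steps.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

With a consistent gradient sign, each update moves the parameter by roughly the learning rate, so the low rate of 0.0002 explains the need for 25000 training iterations.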