Multi-mapping Image-to-Image Translation via Learning Disentanglement
Authors: Xiaoming Yu, Yuanqi Chen, Shan Liu, Thomas Li, Ge Li
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our method outperforms state-of-the-art methods. We compare our approach against recent one-to-many mapping models in two tasks, including season transfer and semantic image synthesis. The quantitative results shown in Table 2 further confirm our observations above. It is remarkable that our method achieves the best FID score while greatly surpassing the multi-domain and multi-modal models in LPIPS distance. |
| Researcher Affiliation | Collaboration | Xiaoming Yu (1,2), Yuanqi Chen (1,2), Thomas Li (1,3), Shan Liu (4), and Ge Li (1,2). (1) School of Electronics and Computer Engineering, Peking University; (2) Peng Cheng Laboratory; (3) Advanced Institute of Information Technology, Peking University; (4) Tencent America. Emails: xiaomingyu@pku.edu.cn, cyq373@pku.edu.cn, tli@aiit.org.cn, shanl@tencent.com, geli@ece.pku.edu.cn |
| Pseudocode | No | The paper describes its method and learning strategy but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code will be available at https://github.com/Xiaoming-Yu/DMIT. |
| Open Datasets | Yes | Yosemite summer ↔ winter: the unpaired dataset provided by Zhu et al. [45] for evaluating unsupervised I2I methods. CUB: the Caltech-UCSD Birds (CUB) [36] dataset contains 200 bird species with 11,788 images, each annotated with 10 text captions [32]. |
| Dataset Splits | No | The paper mentions using a 'training set' and 'test image' for evaluation but does not specify the exact percentages or counts for training, validation, and test splits, nor does it explicitly mention a validation set. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, CUDA versions) that would be needed to replicate the experiment environment. |
| Experiment Setup | No | The paper defines loss functions with coefficients (e.g., λKLE, λrec, λreg) but does not provide specific numerical values for these or other common experimental setup details such as batch size, learning rate, optimizer, or number of training epochs. |
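The missing coefficient values noted in the last row matter for reproduction because the overall objective in such frameworks is a weighted sum of individual loss terms. A minimal sketch of that composition (the term names mirror the paper's λKLE, λrec, λreg, but all numeric weights here are placeholders, not values from the paper):

```python
def total_loss(losses, weights):
    """Weighted sum of named loss terms: L = sum_i lambda_i * L_i.

    `losses` maps term names to scalar loss values; `weights` maps the
    same names to their lambda coefficients.
    """
    return sum(weights[name] * value for name, value in losses.items())

# Example scalar loss values for one training step (illustrative only).
losses = {"adv": 1.5, "kle": 0.2, "rec": 4.0, "reg": 0.1}
# Placeholder coefficients -- the paper does not report these numbers.
weights = {"adv": 1.0, "kle": 0.01, "rec": 10.0, "reg": 1.0}

print(total_loss(losses, weights))  # 1.0*1.5 + 0.01*0.2 + 10.0*4.0 + 1.0*0.1
```

A reproduction attempt would need to recover these coefficients (and the optimizer, batch size, and epoch count) from the released code rather than from the paper itself.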