DeepI2I: Enabling Deep Hierarchical Image-to-Image Translation by Transferring from GANs
Authors: Yaxing Wang, Lu Yu, Joost van de Weijer
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods) we decrease mFID by at least 35% when compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets. |
| Researcher Affiliation | Collaboration | Yaxing Wang, Lu Yu, Joost van de Weijer Computer Vision Center, Universitat Autònoma de Barcelona {yaxing, lu, joost}@cvc.uab.es ... We acknowledge the support from Huawei Kirin Solution. |
| Pseudocode | No | The paper describes its approach and training losses in text and mathematical formulas but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and models are made public at: https://github.com/yaxingwang/DeepI2I. |
| Open Datasets | Yes | We present our results on four datasets, namely Animal faces [38], Birds [63], Foods [31] and cat2dog [34]. |
| Dataset Splits | No | We resized all images to 128 × 128, and split each dataset into training set (90%) and test set (10%). (A preprocessing sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies (e.g., Python, PyTorch, TensorFlow, or other libraries/solvers). |
| Experiment Setup | Yes | The training details for all models are included in Suppl. Mat. Sec. A. |
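
The dataset-splits row quotes a 90%/10% train/test split with all images resized to 128 × 128, but the paper gives no tooling, random seed, or split files. Below is a minimal sketch of how such preprocessing could be reproduced; the directory layout, `.jpg` extension, use of PIL, and the seed are assumptions for illustration, not details taken from the paper or its repository.

```python
# Hedged sketch: resize every image to 128x128 and split each dataset
# 90%/10% into train/test, as quoted in the Dataset Splits row.
# The seed, file layout, and PIL usage are assumptions, not from the paper.
import random
from pathlib import Path
from PIL import Image

def resize_and_split(src_dir: str, dst_dir: str, seed: int = 0) -> None:
    """Resize all images under src_dir to 128x128 and write them to
    dst_dir/train or dst_dir/test using a random 90/10 split."""
    paths = sorted(Path(src_dir).rglob("*.jpg"))
    random.Random(seed).shuffle(paths)
    n_train = int(0.9 * len(paths))
    for i, path in enumerate(paths):
        split = "train" if i < n_train else "test"
        out = Path(dst_dir) / split / path.name
        out.parent.mkdir(parents=True, exist_ok=True)
        Image.open(path).convert("RGB").resize((128, 128), Image.BICUBIC).save(out)

# Example with hypothetical paths:
# resize_and_split("data/animal_faces", "data/animal_faces_128")
```

Because the paper does not release split indices or a seed, any such re-split will differ from the authors' exact partition, which is consistent with the "No" verdict in that row.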