Rethinking the Paradigm of Content Constraints in Unpaired Image-to-Image Translation
Authors: Xiuding Cai, Yaoyao Zhu, Dong Miao, Linjie Fu, Yu Yao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on multiple datasets demonstrate the effectiveness and advantages of EnCo, and we achieve multiple state-of-the-art results compared to previous methods. |
| Researcher Affiliation | Academia | ¹ Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, China; ² University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper provides architectural diagrams and mathematical formulations for its method but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for their methodology or provide a link to a code repository. |
| Open Datasets | Yes | To demonstrate the superiority of our method, we trained and evaluated on three popular I2I benchmark datasets, including Cityscapes, Cat→Dog, and Horse→Zebra. Cityscapes (Cordts et al. 2016)... Cat→Dog comes from the AFHQ dataset (Choi et al. 2020)... Horse→Zebra, collected by CycleGAN from ImageNet (Deng et al. 2009)... |
| Dataset Splits | No | The paper states that Cityscapes contains '2975 training images and 500 test images' but does not specify a separate validation split or explicit percentages for all datasets. No general methodology for splitting data (e.g., cross-validation or random seed) is provided. |
| Hardware Specification | No | The paper details experimental settings like optimizers, learning rates, and batch size, but it does not specify any hardware used for training or inference, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'ResNet-based generator' but does not provide specific version numbers for any software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used in the implementation. |
| Experiment Setup | Yes | We use the Adam optimizer (Kingma and Ba 2014) with β1 = 0.5 and β2 = 0.999. For the Cityscapes and Horse→Zebra datasets, 400 epochs are trained, and 200 epochs are trained only for the Cat→Dog dataset. Following TTUR (Heusel et al. 2017), we set unbalanced learning rates of 5e-5, 2e-4, and 5e-5 for the generator, discriminator, and projection head, respectively. We start linearly decaying the learning rate halfway through training with a batch size of 1. (A hedged sketch of this setup follows the table.) |
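The row above fully specifies the optimization recipe, so it can be expressed directly in code. The sketch below assumes PyTorch (the paper does not name its framework), and the function name `build_optimizers` and the module arguments `generator`, `discriminator`, and `projection_head` are illustrative placeholders; only the Adam betas, the TTUR learning rates, the epoch counts, and the linear-decay schedule come from the paper.

```python
import torch

def build_optimizers(generator, discriminator, projection_head,
                     total_epochs=400):
    """Adam with TTUR-style unbalanced learning rates and a linear LR
    decay beginning halfway through training, per the paper's setup.
    Module arguments are placeholders, not the authors' code."""
    betas = (0.5, 0.999)  # beta1 = 0.5, beta2 = 0.999 as reported
    opt_g = torch.optim.Adam(generator.parameters(), lr=5e-5, betas=betas)        # generator
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=betas)    # discriminator
    opt_h = torch.optim.Adam(projection_head.parameters(), lr=5e-5, betas=betas)  # projection head

    def linear_decay(epoch):
        # Hold the LR constant for the first half of training,
        # then decay it linearly toward zero over the second half.
        half = total_epochs // 2
        if epoch < half:
            return 1.0
        return max(0.0, 1.0 - (epoch - half) / float(total_epochs - half))

    schedulers = [torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda=linear_decay)
                  for opt in (opt_g, opt_d, opt_h)]
    return (opt_g, opt_d, opt_h), schedulers
```

Each scheduler would be stepped once per epoch after the optimizer updates. With `total_epochs=400` (Cityscapes, Horse→Zebra) or `total_epochs=200` (Cat→Dog), the three learning rates stay fixed for the first half of training and then decay linearly toward zero by the end, matching the TTUR-style schedule described in the row above.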