TiGAN: Text-Based Interactive Image Generation and Manipulation
Authors: Yufan Zhou, Ruiyi Zhang, Jiuxiang Gu, Chris Tensmeyer, Tong Yu, Changyou Chen, Jinhui Xu, Tong Sun
AAAI 2022, pp. 3580-3588 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on several datasets show that TiGAN improves both interaction efficiency and image quality while better avoiding undesirable image manipulation during interactions. We conduct extensive experiments on three different datasets: UT Zappos50k (Yu and Grauman 2014), MSCOCO 2014 (Lin et al. 2014) and Multi-Modal CelebA-HQ (Xia et al. 2021). The experiments are implemented under two settings: single-round image generation and interactive (multi-round) image generation. All experiments are conducted on 4 Nvidia Tesla V100 GPUs and implemented with PyTorch. |
| Researcher Affiliation | Collaboration | Yufan Zhou¹*, Ruiyi Zhang², Jiuxiang Gu², Chris Tensmeyer², Tong Yu², Changyou Chen¹, Jinhui Xu¹, Tong Sun². ¹State University of New York at Buffalo, ²Adobe Research. {yufanzho, changyou, jinhui}@buffalo.edu, {ruizhang, jigu, tensmeye, tyu, tsun}@adobe.com |
| Pseudocode | No | The paper describes its methods using mathematical equations and descriptive text but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing open-source code for its methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We conduct extensive experiments on three different datasets: UT Zappos50k (Yu and Grauman 2014), MSCOCO 2014 (Lin et al. 2014) and Multi-Modal CelebA-HQ (Xia et al. 2021). |
| Dataset Splits | No | The paper mentions using a "test set" for evaluation and refers to an Appendix for "Details of the datasets, the experimental setup and hyper-parameters", but it does not explicitly provide training, validation, or test dataset split percentages or counts within the main body. |
| Hardware Specification | Yes | All experiments are conducted on 4 Nvidia Tesla V100 GPUs and implemented with PyTorch. |
| Software Dependencies | No | The paper mentions that the experiments are "implemented with PyTorch" but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | No | The paper states: "Details of the datasets, the experimental setup and hyper-parameters are provided in the Appendix." However, the main text does not include specific hyperparameter values or detailed training configurations. |