Texture Reformer: Towards Fast and Universal Interactive Texture Transfer
Authors: Zhizhong Wang, Lei Zhao, Haibo Chen, Ailin Li, Zhiwen Zuo, Wei Xing, Dongming Lu
AAAI 2022, pp. 2624-2632
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on a variety of application scenarios demonstrate the effectiveness and superiority of our framework. And compared with the state-of-the-art interactive texture transfer algorithms, it not only achieves higher quality results but, more remarkably, also is 2-5 orders of magnitude faster. We apply our framework to many challenging interactive texture transfer tasks, and demonstrate its effectiveness and superiority through extensive comparisons with the state-of-the-art (SOTA) algorithms. In Table 1, we compare the running time with the competitors. |
| Researcher Affiliation | Academia | College of Computer Science and Technology, Zhejiang University {endywon, cszhl, cshbchen, liailin, zzwcs, wxing, ldm}@zju.edu.cn |
| Pseudocode | No | The paper describes procedures using numbered steps within paragraphs (e.g., in Section 3.1 for SGTW), but it does not present these as formal pseudocode blocks or clearly labeled algorithm sections. |
| Open Source Code | Yes | Code is available at https://github.com/EndyWon/Texture-Reformer. |
| Open Datasets | Yes | The decoders are trained on the Microsoft COCO dataset (Lin et al. 2014) |
| Dataset Splits | No | The paper mentions training on the Microsoft COCO dataset but does not specify any dataset splits (e.g., percentages or counts for training, validation, or testing data) needed for reproduction. |
| Hardware Specification | Yes | Tested on a 3.3 GHz hexa-core CPU and a 6GB Nvidia 1060 GPU. |
| Software Dependencies | No | The paper does not provide version numbers for software dependencies such as libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages; only general mentions appear in the text. |
| Experiment Setup | Yes | The hyperparameters that control the semantic-awareness (Eq. 4) in stage I and stage II are set to ω1 = ω2 = 50 (ω1 for stage I, ω2 for stage II. See supplementary material (SM) for their effects). |