TET-GAN: Text Effects Transfer via Stylization and Destylization
Authors: Shuai Yang, Jiaying Liu, Wenjing Wang, Zongming Guo
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that the disentangled feature representations enable us to transfer or remove all these styles on arbitrary glyphs using one network. Furthermore, the flexible network design empowers TET-GAN to efficiently extend to a new text style via one-shot learning where only one example is required. We demonstrate the superiority of the proposed method in generating high-quality stylized text over the state-of-the-art methods. |
| Researcher Affiliation | Academia | Shuai Yang, Jiaying Liu, Wenjing Wang, Zongming Guo; Institute of Computer Science and Technology, Peking University, Beijing, China; {williamyang, liujiaying, daooshee, guozongming}@pku.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any links to open-source code or state that code is available. |
| Open Datasets | No | We propose a new text effects dataset with as many as 64 professionally designed styles on 837 characters. We propose a new dataset including 64 text effects, each with 775 Chinese characters, 52 English letters and 10 Arabic numerals, where the first 708 Chinese characters are for training and the others for testing. The paper describes the creation of this dataset but does not provide access information (link, DOI, repository) indicating that it is publicly available. |
| Dataset Splits | No | The paper states that characters are split for training and testing ("where the first 708 Chinese characters are for training and others for testing") but does not explicitly mention a separate validation split or its size/methodology. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer' and adapting the 'pix2pix' cGAN and 'U-Net' architectures but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | The Adam optimizer is applied with a fixed learning rate of 0.0002 and batch sizes of 32, 16, and 8 for image sizes of 64×64, 128×128, and 256×256, respectively. For all experiments, we set λ_dfeat = λ_dpix = λ_spix = λ_rec = λ_srec = 100, λ_gp = 10, and λ_dadv = λ_sadv = 1. (See the configuration sketch below the table.) |
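
Since the paper releases no code, the following is a minimal sketch, assuming a PyTorch implementation, of how the reported training configuration and character split could be expressed. All names here (`LOSS_WEIGHTS`, `BATCH_SIZE_BY_RESOLUTION`, the stand-in model) are hypothetical; only the numeric values come from the paper.

```python
import torch
import torch.nn as nn

# Loss weights as reported in the Experiment Setup row above
# (lambda_x is the weight on the paper's loss term L_x).
LOSS_WEIGHTS = {
    "dfeat": 100.0,
    "dpix": 100.0,
    "spix": 100.0,
    "rec": 100.0,
    "srec": 100.0,
    "gp": 10.0,
    "dadv": 1.0,
    "sadv": 1.0,
}

# Batch size depends on the training resolution, per the paper.
BATCH_SIZE_BY_RESOLUTION = {64: 32, 128: 16, 256: 8}

# Fixed Adam learning rate of 0.0002; other Adam hyperparameters are
# not reported, so PyTorch defaults are assumed here.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # stand-in for TET-GAN
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

# Hypothetical character-level split: the paper trains on the first 708
# of 775 Chinese characters and tests on the remaining 67.
chinese_chars = list(range(775))  # placeholder character indices
train_chars, test_chars = chinese_chars[:708], chinese_chars[708:]
print(len(train_chars), len(test_chars))  # 708 67
```

Keeping the λ values in a single dictionary makes them easy to audit against the paper, which is useful given that no reference implementation is available.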