FET-GAN: Font and Effect Transfer via K-shot Adaptive Instance Normalization
Authors: Wei Li, Yongxing He, Yanwei Qi, Zejian Li, Yongchuan Tang (pp. 1717-1724)
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experimental validation and comparison, our model advances the state-of-the-art in the text effect transfer task. Besides, we have collected a font dataset including 100 fonts of more than 800 Chinese and English characters. Based on this dataset, we demonstrated the generalization ability of our model by the application that complements the font library automatically by few-shot samples. |
| Researcher Affiliation | Academia | Wei Li (1), Yongxing He (1), Yanwei Qi (1), Zejian Li (1), Yongchuan Tang (1,2); (1) College of Computer Science, Zhejiang University, Hangzhou, 310027, China; (2) Zhejiang Lab, Hangzhou, 310027, China |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our dataset, pre-trained models and codes in PyTorch (Paszke et al. 2017) are available at https://liweileev.github.io/FET-GAN/. |
| Open Datasets | Yes | Text Effects Dataset: We use the text effects dataset proposed in (Yang et al. 2019). Fonts-100: We collect a new dataset including 100 fonts each with 775 Chinese characters, 52 English letters, and 10 Arabic numerals. There are a total of 83,700 images, each of which is 320×320 in size. ... Our dataset, pre-trained models and codes in PyTorch (Paszke et al. 2017) are available at https://liweileev.github.io/FET-GAN/. |
| Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split, only training and testing phases for models. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'codes in PyTorch' but does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For FET-GAN, we set λ_code = 1, λ_transfer = 10, λ_rec = 10, and λ_GAN = 1. We optimize the objective using the Adam solver (Kingma and Ba 2014) with a batch size of 4. All networks are trained from scratch with a learning rate of 0.0002. ... train the FET-GAN model using K = 4 for 30 epochs... (See the configuration sketch below.) |
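
The snippet below is a minimal PyTorch sketch of the reported training configuration, intended only to make the hyperparameters concrete. The loss weights (λ_code, λ_transfer, λ_rec, λ_GAN), the Adam optimizer, batch size 4, learning rate 0.0002, K = 4, and 30 epochs come from the Experiment Setup row; the encoder/generator/discriminator modules, the individual loss terms, and the Adam beta values are placeholders and assumptions, not the authors' actual implementation (which is available at the project page linked above).

```python
import torch
import torch.nn as nn

# Loss weights reported in the paper.
LAMBDA_CODE = 1.0
LAMBDA_TRANSFER = 10.0
LAMBDA_REC = 10.0
LAMBDA_GAN = 1.0

# Training hyperparameters reported in the paper.
BATCH_SIZE = 4
LEARNING_RATE = 2e-4
K_SHOT = 4     # number of reference style images per sample
EPOCHS = 30

# Placeholder modules: the real FET-GAN encoder/generator/discriminator
# architectures are not reproduced here.
encoder = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
generator = nn.Sequential(nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))

# Adam with the reported learning rate; beta values are PyTorch defaults,
# not taken from the paper.
opt_g = torch.optim.Adam(
    list(encoder.parameters()) + list(generator.parameters()),
    lr=LEARNING_RATE,
)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=LEARNING_RATE)


def generator_objective(loss_code, loss_transfer, loss_rec, loss_gan):
    """Weighted sum of the four loss terms using the reported lambda values."""
    return (
        LAMBDA_CODE * loss_code
        + LAMBDA_TRANSFER * loss_transfer
        + LAMBDA_REC * loss_rec
        + LAMBDA_GAN * loss_gan
    )
```

A full reproduction would substitute the authors' released networks and loss definitions from https://liweileev.github.io/FET-GAN/ for the placeholders above; only the scalar settings are documented in the excerpted text.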