SCFont: Structure-Guided Chinese Font Generation via Deep Stacked Networks

Authors: Yue Jiang, Zhouhui Lian, Yingmin Tang, Jianguo Xiao

AAAI 2019, pp. 4015-4022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate the superiority of the proposed SCFont compared to the state of the art in both visual and quantitative assessments.
Researcher Affiliation | Academia | Yue Jiang, Zhouhui Lian, Yingmin Tang, Jianguo Xiao; Institute of Computer Science and Technology, Peking University, Beijing, P.R. China; {yue.jiang, lianzhouhui, tangyingmin, xiaojianguo}@pku.edu.cn
Pseudocode | No | The paper describes the system architecture and methods but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper provides GitHub links for external works (Rewrite, zi2zi) in its references, but there is no explicit statement or link indicating that the source code for the proposed SCFont method is publicly available.
Open Datasets | Yes | We select to conduct experiments on seventy Chinese font libraries in different handwriting styles as well as designing styles. Here, the trajectories of strokes in all character images have been attained by stroke extraction techniques and a few wrong extraction results have been manually corrected. We take the optimal input set (Opt Set) presented in (Lian, Zhao, and Xiao 2016), which contains 775 characters, as the input set.
Dataset Splits | No | The paper mentions selecting '6000 characters randomly chosen from GB2312 charset for each font' to pretrain the entire network and fine-tuning with '775 writing samples', but it does not specify explicit train/validation/test splits with percentages or counts (see the data-selection sketch after this table).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and CNN models, but it does not specify any software names with version numbers for the libraries or programming languages used in the implementation.
Experiment Setup | Yes | In our experiment, the input and output character images are both of size 320 × 320 × 3. We use mini-batches of size 16 and train the model with the Adam optimizer. The learning rate is initialized to 0.001 and decayed by half after 5 iterations. In the Skel Net, λ_j = 2^j for j ∈ [0, 6]; in the Style Net, λ_pix, λ_con, and λ_ad are set to 100, 15, and 1, respectively.
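
The Dataset Splits row leaves the character selection underspecified. As a minimal sketch, assuming only what the paper states (6000 pretraining characters sampled from GB2312 per font, and the 775-character Opt Set for fine-tuning), the selection could look like the following; the GB2312 enumeration helper, the random seed, and the placeholder slice standing in for the unreleased Opt Set list are all assumptions, not the authors' code:

```python
import random

def gb2312_hanzi():
    """Enumerate the Chinese characters (levels 1 and 2) of the GB2312 charset."""
    chars = []
    for hi in range(0xB0, 0xF8):      # lead bytes of the two-byte Hanzi region
        for lo in range(0xA1, 0xFF):
            try:
                chars.append(bytes([hi, lo]).decode("gb2312"))
            except UnicodeDecodeError:
                pass                   # skip unassigned code points
    return chars

all_chars = gb2312_hanzi()             # 6763 characters in total

# Pretraining: "6000 characters randomly chosen from GB2312 charset for each font".
random.seed(0)                         # arbitrary seed; the paper gives none
pretrain_chars = random.sample(all_chars, 6000)

# Fine-tuning: the 775-character Opt Set of (Lian, Zhao, and Xiao 2016).
# The actual character list is not reproduced in the paper, so a slice stands in.
opt_set_chars = all_chars[:775]
```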
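
Similarly, the Experiment Setup row pins down enough hyperparameters to sketch the optimization loop. Since no code or framework is released, the following PyTorch-style sketch is only illustrative: the stand-in generator, the L1 choice for the pixel term, and the zero placeholders for the content and adversarial losses are assumptions; only the batch size, image size, learning-rate schedule, and loss weights come from the paper:

```python
import torch
import torch.nn as nn

# Stand-in generator; the real Skel Net / Style Net architectures are not released.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Reported setup: Adam, batch size 16, lr 0.001 halved after 5 iterations
# (the schedule granularity is ambiguous; StepLR(step_size=5) is one reading).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)

# Style Net loss weights as reported: λ_pix = 100, λ_con = 15, λ_ad = 1.
lambda_pix, lambda_con, lambda_ad = 100.0, 15.0, 1.0

x = torch.randn(16, 3, 320, 320)       # mini-batch of 16 images, 320 × 320 × 3
target = torch.randn(16, 3, 320, 320)

optimizer.zero_grad()
pred = model(x)
l_pix = nn.functional.l1_loss(pred, target)  # pixel term (L1 is an assumption)
l_con = torch.zeros(())                      # placeholder for the content loss
l_ad = torch.zeros(())                       # placeholder for the adversarial loss
loss = lambda_pix * l_pix + lambda_con * l_con + lambda_ad * l_ad
loss.backward()
optimizer.step()
scheduler.step()
```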