FontRL: Chinese Font Synthesis via Deep Reinforcement Learning

Authors: Yitian Liu, Zhouhui Lian

AAAI 2021, pp. 2198-2206

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Both quantitative and qualitative experimental results demonstrate the superiority of the proposed FontRL compared to the state of the art.
Researcher Affiliation | Academia | Wangxuan Institute of Computer Technology, Peking University, Beijing, P.R. China ({lsflyt, lianzhouhui}@pku.edu.cn)
Pseudocode | No | The paper describes its methodology in text and through architectural diagrams (Figure 1 and Figure 2) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/lsflytpku/FontRL.
Open Datasets | Yes | In our experiments, we directly use the dataset introduced in (Jiang et al. 2019), which consists of glyph images of all 6763 Chinese characters and their manually-specified stroke skeletons in 5 different font styles as target and a mean font style as reference.
Dataset Splits | No | The paper mentions using 'an input character set proposed in (Lian, Zhao, and Xiao 2016) for training', consisting of 775 Chinese characters, but does not specify train/validation/test splits or their proportions.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as CPU or GPU models.
Software Dependencies | No | The paper reports learning rates and hyper-parameters but does not list software dependencies with version numbers, such as the programming language or deep-learning framework versions.
Experiment Setup | Yes | The learning rate of MPNet is initialized as 0.0003, and decayed to 0.0001 after 6000 iterations; the learning rate of BBoxNet is initialized as 0.001, decayed to 0.0005 after 40 epochs, and then decayed to 0.0001 after 100 epochs; the hyper-parameters of StyleNet are set to the default values.