DeepCalliFont: Few-Shot Chinese Calligraphy Font Synthesis by Integrating Dual-Modality Generative Models

Authors: Yitian Liu, Zhouhui Lian

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Both qualitative and quantitative experiments have been conducted to demonstrate the superiority of our method to other state-of-the-art approaches in the task of few-shot Chinese calligraphy font synthesis." |
| Researcher Affiliation | Academia | Wangxuan Institute of Computer Technology, Peking University, Beijing, P.R. China ({lsflyt, lianzhouhui}@pku.edu.cn) |
| Pseudocode | No | The paper includes diagrams and mathematical formulas, but no structured pseudocode or algorithm blocks labeled as such. |
| Open Source Code | Yes | "The source code can be found at https://github.com/lsflytpku/DeepCalliFont." |
| Open Datasets | Yes | "We use 251 fonts and CASIA Online Chinese Handwriting Databases (Liu et al. 2011) to pre-train two branches separately in the pre-training phase 1. Then, in the pre-training phase 2, we selected 42 fonts used in (Jiang et al. 2019), each of which consists of 3,000 glyph images and their corresponding writing trajectories, to train the whole model. ... We use 30 fonts in the dataset collected by Liu and Lian (Liu and Lian 2023) as the regular font test set." |
| Dataset Splits | No | The paper states, "we fine-tune networks on 100 sample characters and test them on other 6,663 Chinese characters." While this defines the training and testing data, the paper does not explicitly mention a separate validation set or its split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments. |
| Experiment Setup | Yes | "In this paper, θ and w are chosen as 100 and 2, respectively." |