Few-shot Font Generation with Localized Style Representations and Factorization

Authors: Song Park, Sanghyuk Chun, Junbum Cha, Bado Lee, Hyunjung Shim

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section shows the comparison results of LF-Font and previous methods in the Chinese few-shot font generation... We evaluate the visual quality of generated glyphs using various metrics. Quantitative evaluation. We evaluate the visual quality of the images generated by six models with eight reference glyphs per style. To avoid randomness from the reference selection, we repeat the experiments 50 times with different reference characters. (A hedged sketch of this resampling protocol is given after the table.)
Researcher Affiliation | Collaboration | Song Park (1), Sanghyuk Chun (2), Junbum Cha (3), Bado Lee (3), Hyunjung Shim (1); (1) School of Integrated Technology, Yonsei University; (2) NAVER AI LAB; (3) NAVER CLOVA
Pseudocode | No | The paper describes the methodology in text and mathematical equations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at https://github.com/clovaai/lffont.
Open Datasets | No | We collect public 482 Chinese fonts from the web. This sentence states that public fonts were collected, but it does not provide specific access information (a link, DOI, or citation to a specific dataset source).
Dataset Splits | No | We sample 467 fonts corresponding to 19,234 characters for training, and the remaining unseen 15 fonts are used for the evaluation.
Hardware Specification | No | NAVER Smart Machine Learning (NSML) (Kim et al. 2018) has been used for experiments. This statement names the platform but does not specify any hardware details such as GPU or CPU models.
Software Dependencies | No | The paper mentions several components such as the Adam optimizer, VGG-16, ResNet-50, CutMix augmentation, and the AdamP optimizer, but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We set λ_L1 = 1.0 and λ_feat = λ_cls = λ_rep = 0.1 throughout all the experiments. We optimize our model with the Adam optimizer (Kingma and Ba 2015). For stable training, we first train the model without the factorization modules, as in Eq. (2). Here, the model is trained to generate a target glyph from the component-wise style features f_{s,u} directly extracted from the reference set X_r. We construct a mini-batch with pairs of a reference set and a target glyph. After enough iterations, we add the factorization modules to the model and jointly train all modules.
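The quoted experiment setup amounts to a weighted multi-term objective optimized with Adam in two stages: first without the factorization modules, then jointly with them. Below is a minimal, non-authoritative sketch of that loss weighting and staging, assuming PyTorch; the module names, loss-term variables, and the learning rate are placeholders for illustration, not the authors' released implementation.

```python
import torch

# Loss weights quoted in the experiment setup above.
LAMBDA_L1, LAMBDA_FEAT, LAMBDA_CLS, LAMBDA_REP = 1.0, 0.1, 0.1, 0.1

def total_loss(l1_term, feat_term, cls_term, rep_term):
    """Weighted sum of the four loss terms (all placeholder tensors)."""
    return (LAMBDA_L1 * l1_term + LAMBDA_FEAT * feat_term
            + LAMBDA_CLS * cls_term + LAMBDA_REP * rep_term)

def make_optimizer(modules, lr=1e-4):
    """Adam over the given modules; the lr value is a placeholder, not from the paper."""
    params = [p for m in modules for p in m.parameters()]
    return torch.optim.Adam(params, lr=lr)

# Two-stage schedule described in the quote:
#   phase 1: train without the factorization modules (styles come directly from the
#            component-wise features f_{s,u} extracted from the reference set X_r)
#   phase 2: add the factorization modules and train all modules jointly
# phase1_opt = make_optimizer([encoder, decoder])                  # hypothetical modules
# phase2_opt = make_optimizer([encoder, decoder, factor_modules])  # hypothetical modules
```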
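The evaluation protocol quoted under Research Type (eight reference glyphs per style, repeated 50 times with different reference characters, metrics averaged over repeats) can be outlined as a simple resampling loop. This is an illustrative sketch only; `evaluate`, `model.generate`, and the data structures are hypothetical names rather than the paper's evaluation code.

```python
import random
import statistics

N_REPEATS = 50        # repeat with different reference characters
N_REFS_PER_STYLE = 8  # eight reference glyphs per style

def evaluate(model, style_glyphs, metric_fns, seed=0):
    """Average each metric over repeated random reference selections.

    style_glyphs: dict mapping a style name to its available reference glyphs.
    metric_fns:   dict mapping a metric name to a callable(generated, style_glyphs).
    """
    rng = random.Random(seed)
    scores = {name: [] for name in metric_fns}
    for _ in range(N_REPEATS):
        # Draw a fresh 8-glyph reference set per style to avoid selection bias.
        refs = {style: rng.sample(glyphs, N_REFS_PER_STYLE)
                for style, glyphs in style_glyphs.items()}
        generated = model.generate(refs)  # hypothetical generation API
        for name, fn in metric_fns.items():
            scores[name].append(fn(generated, style_glyphs))
    return {name: statistics.mean(values) for name, values in scores.items()}
```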