Learning to Write Stylized Chinese Characters by Reading a Handful of Examples
Authors: Danyang Sun, Tongzheng Ren, Chongxuan Li, Hang Su, Jun Zhu
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our method has a powerful one-shot/few-shot generalization ability by inferring the style representation; this is, to the authors' knowledge, the first attempt to learn to write new-style Chinese characters by observing only one or a few examples. (A minimal sketch of this style-inference pattern follows the table.) |
| Researcher Affiliation | Academia | Department of Computer Science and Technology; Tsinghua Lab of Brain and Intelligence; State Key Lab of Intelligent Technology and Systems; BNRist Lab, Tsinghua University, 100084, China. {sundy16, rtz14, lcx14}@mails.tsinghua.edu.cn; {suhangss, dcszj}@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1: Training Algorithm. (A generic sketch of such a training step follows the table.) |
| Open Source Code | No | The paper does not provide access to open-source code for the described method. |
| Open Datasets | No | As our main purpose is to generalize to new styles by learning from sufficient support styles, we need enough styles for our model to learn from. As no existing dataset satisfies this goal, we build a new one, consisting of Chinese characters in 200 styles collected from the Internet. |
| Dataset Splits | Yes | We randomly select 80% of these styles (i.e., 160 styles) for the style-bank used in training and the remaining 20% (i.e., 40 styles) for testing. (A minimal reproduction of this split follows the table.) |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | Our project is built on ZhuSuan [Shi et al., 2017], which is a deep probabilistic programming library based on TensorFlow; no version numbers are given. |
| Experiment Setup | No | The paper describes the model architecture and training algorithm but does not report specific hyperparameter values or other system-level training settings. |
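
The one-shot/few-shot claim above rests on inferring a single style representation from a handful of reference glyphs and then reusing it to render unseen characters. The sketch below illustrates only that inference pattern; the NumPy "networks", dimensions, and function names (`encode_style`, `generate`) are hypothetical stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IMG, D_STYLE, D_CONTENT = 64 * 64, 32, 16  # illustrative sizes

# Placeholder "networks": random linear maps standing in for the paper's
# learned style encoder and character generator.
W_enc = rng.standard_normal((D_STYLE, D_IMG)) * 0.01
W_gen = rng.standard_normal((D_IMG, D_STYLE + D_CONTENT)) * 0.01

def encode_style(glyphs):
    """Infer one style code from K reference glyphs by averaging
    per-glyph encodings; K can be as small as 1 (the one-shot case)."""
    codes = glyphs @ W_enc.T        # (K, D_STYLE)
    return codes.mean(axis=0)       # aggregate into a single style code

def generate(style_code, content_code):
    """Render a glyph conditioned on a (style, content) pair."""
    return W_gen @ np.concatenate([style_code, content_code])

# A single reference glyph in the new style suffices to infer its style code ...
reference_glyphs = rng.standard_normal((1, D_IMG))
style_code = encode_style(reference_glyphs)
# ... which can then be paired with any character's content code.
new_glyph = generate(style_code, rng.standard_normal(D_CONTENT))
print(new_glyph.shape)  # (4096,), i.e. a flattened 64x64 glyph
```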
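
Algorithm 1 itself is not reproduced in this report. Given that the project is built on a deep probabilistic programming library, the per-step objective is presumably a variational lower bound; the following is a minimal sketch of a generic VAE-style training objective under that assumption, with all names and shapes illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_step(x, encode, decode):
    """One generic VAE-style objective:
    ELBO = E_q[log p(x|z)] - KL(q(z|x) || N(0, I))."""
    mu, log_var = encode(x)                 # parameters of q(z|x)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps    # reparameterization trick
    x_hat = decode(z)
    recon = -np.sum((x - x_hat) ** 2)       # Gaussian log-likelihood, up to a constant
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return recon - kl                       # quantity to maximize

# Illustrative linear encoder/decoder with a 2-D latent.
x = rng.standard_normal(10)
encode = lambda x: (x[:2], x[2:4])          # returns (mu, log_var)
decode = lambda z: np.tile(z, 5)            # maps the 2-D z back to 10-D x
print(elbo_step(x, encode, decode))
```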
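
The 80%/20% style split reported above is simple to reproduce in spirit. A minimal sketch, assuming only that styles are partitioned uniformly at random (the seed and function name are illustrative, not from the paper):

```python
import random

# Partition 200 style identifiers: 80% (160 styles) for the training
# style-bank, 20% (40 styles) held out for test.
def split_styles(style_ids, train_frac=0.8, seed=0):
    """Randomly partition style identifiers into train/test sets."""
    rng = random.Random(seed)
    shuffled = list(style_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train_styles, test_styles = split_styles(range(200))
assert len(train_styles) == 160 and len(test_styles) == 40
```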