Fully Convolutional Network Based Skeletonization for Handwritten Chinese Characters

Authors: Tie-Qiang Wang, Cheng-Lin Liu

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method on character images synthesized from the online handwritten dataset CASIA-OLHWDB and achieve higher accuracy of skeleton pixel detection than traditional thinning algorithms. We also conduct skeleton based character recognition experiments using convolutional neural network (CNN) classifiers on offline/online handwritten datasets, and obtained comparable accuracies with recognition on original character images."
Researcher Affiliation | Academia | Tie-Qiang Wang (1,2), Cheng-Lin Liu (1,2); (1) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Beijing 100190, P.R. China; (2) University of Chinese Academy of Sciences, Beijing, P.R. China. Email: {tieqiang.wang, liucl}@nlpr.ia.ac.cn
Pseudocode | No | No structured pseudocode or algorithm blocks found.
Open Source Code | No | No explicit statement or link for open-source code provided.
Open Datasets | Yes | "We synthesize the pair-wise training data as (Fig. 6(a), Fig. 6(b)) from CASIA-OLHWDB1.1 (~1.121 million) for the training and test our models on the synthesized data (~224 thousand) from ICDAR-2013 Online HCCR Competition Database (Yin et al. 2013)." Also: "We use the universal set of the online handwritten database CASIA-OLHWDB1.1 (Liu et al. 2011a) to synthesize training data (~1.121 million)." (An illustrative rendering sketch for such pairs appears below the table.)
Dataset Splits | No | "We test the pre-trained model on a random subset (~230 thousand) of the training part of CASIA-OLHWDB1.0 (Liu et al. 2011a)." This statement concerns the pre-training stage and calls that subset a testing set; it is not a validation split for the FCN itself. The main experiments specify training and test data, but no explicit validation split.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are provided for running the experiments.
Software Dependencies | No | No specific software dependencies with version numbers are provided.
Experiment Setup | Yes | "Each convolutional layer works with 3×3 kernels, 1-stride and 1-pad. We employ SGD with learning rate 10^-1, momentum 0.9, weight decay 2×10^-4, and the learning rate is dropped by 0.1 per 3.5 epochs. Here we omit the 2×2 max-pooling and batch normalization layers after each convolutional group." These settings describe the pre-trained model that initializes their FCNs. Three different input sizes are also used: 96×96, 64×64, and 48×48. (A hedged sketch of this training configuration follows the table.)
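
For the Open Datasets row, the paper states that pair-wise training data are synthesized from the online trajectories in CASIA-OLHWDB1.1 (Fig. 6(a)/(b)). The sketch below shows one plausible way to rasterize such trajectories into an (image, skeleton) pair; it is not the authors' code, and the canvas size, pen width, and the render_pair helper are illustrative assumptions, since the exact rendering procedure is not quoted in this report.

```python
# Illustrative only: rasterize an online trajectory into a thick "offline" image
# and a 1-pixel skeleton ground truth. Canvas size and pen width are assumptions.
from PIL import Image, ImageDraw

def render_pair(strokes, size=64, pen_width=5):
    """strokes: list of strokes, each a list of (x, y) points already scaled to [0, size)."""
    image = Image.new("L", (size, size), 0)      # thick rendering ~ offline character image
    skeleton = Image.new("L", (size, size), 0)   # 1-pixel rendering ~ skeleton ground truth
    draw_img, draw_skel = ImageDraw.Draw(image), ImageDraw.Draw(skeleton)
    for stroke in strokes:
        if len(stroke) == 1:                     # isolated point: draw a dot
            x, y = stroke[0]
            r = pen_width // 2
            draw_img.ellipse([x - r, y - r, x + r, y + r], fill=255)
            draw_skel.point((x, y), fill=255)
        else:
            draw_img.line(stroke, fill=255, width=pen_width, joint="curve")
            draw_skel.line(stroke, fill=255, width=1)
    return image, skeleton

# Toy two-stroke example
image, skeleton = render_pair([[(10, 10), (50, 10)], [(30, 5), (30, 55)]])
```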
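
For the Experiment Setup row, a minimal sketch of the quoted training configuration is given below, assuming a PyTorch implementation. Only the 3×3/stride-1/pad-1 convolutions, the 2×2 max pooling and batch normalization after each convolutional group, and the SGD hyper-parameters come from the paper; the number of groups, channel widths, class count, and steps-per-epoch are placeholders.

```python
import torch.nn as nn
import torch.optim as optim

def conv_group(in_ch, out_ch, n_convs=2):
    """One convolutional group: 3x3/stride-1/pad-1 convs, then BN and 2x2 max pooling.
    The conv count per group and the BN/pooling ordering are assumptions."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, stride=1, padding=1),
                   nn.ReLU(inplace=True)]
    layers += [nn.BatchNorm2d(out_ch), nn.MaxPool2d(kernel_size=2, stride=2)]
    return nn.Sequential(*layers)

# Illustrative pre-training backbone for 96x96 / 64x64 / 48x48 character inputs.
model = nn.Sequential(
    conv_group(1, 64),
    conv_group(64, 128),
    conv_group(128, 256),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(256, 3755),  # 3755 classes is a common HCCR choice, not stated in the quotes
)

# SGD settings quoted from the paper.
optimizer = optim.SGD(model.parameters(), lr=1e-1, momentum=0.9, weight_decay=2e-4)

# "Dropped by 0.1 per 3.5 epochs": stepping the scheduler per iteration lets the
# non-integer epoch interval be honoured (steps_per_epoch is a placeholder).
steps_per_epoch = 1000
scheduler = optim.lr_scheduler.StepLR(optimizer,
                                      step_size=int(3.5 * steps_per_epoch),
                                      gamma=0.1)
```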