Aesthetic Visual Quality Evaluation of Chinese Handwritings

Authors: Rongju Sun, Zhouhui Lian, Yingmin Tang, Jianguo Xiao

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that the proposed AI system provides a comparable performance with human evaluation.
Researcher Affiliation | Academia | Institute of Computer Science and Technology, Peking University, Beijing, P.R. China
Pseudocode | No | The paper describes its methods in prose and diagrams but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states that the Chinese Handwriting Aesthetic Evaluation Database (CHAED) is publicly available on its website (http://www.icst.pku.edu.cn/zlian/chin-beauty-eval/), but it does not state that the source code for the method is available.
Open Datasets | Yes | We propose a relatively large-scale Chinese Handwriting Aesthetic Evaluation Database (CHAED), which is publicly available on our website (http://www.icst.pku.edu.cn/zlian/chin-beauty-eval/).
Dataset Splits | No | The paper states: 'For global features, half of the database is used for training and the other half for testing. To be specific, for each character, there are 5 handwriting samples for training and 5 for testing.' It does not mention a validation split. (A minimal sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper does not explicitly describe the hardware (e.g., CPU or GPU model, memory) used to run the experiments.
Software Dependencies | No | The paper mentions back-propagation neural networks and Support Vector Machines (SVM), along with the MATLAB Neural Network Toolbox functions TRAINGDM and LEARNGDM, but does not provide version numbers for any software dependency.
Experiment Setup | Yes | We build three 4-layer back-propagation neural networks denoted as Netg, Netc and Neto for global features, component layout features and hybrid features, respectively. ... The number of neurons for Netg, Netc and Neto are respectively (22, 20, 10, 3), (10, 15, 10, 3) and (32, 40, 20, 3). We determine the structure of the 3 neural networks by adjusting the training function, adaption learning function and the number of neurons in every layer to achieve the best evaluation results in the training dataset. Here, we choose TRAINGDM as the training function and LEARNGDM as the adaption learning function for these 3 networks. (An illustrative reconstruction of these networks follows the split sketch after the table.)
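
Because the quoted split protocol is fully specified (per character: 5 samples for training, 5 for testing, no validation set), it can be written down in a few lines. The Python sketch below is ours, not the authors'; the `samples_by_char` mapping and the random 5/5 assignment are assumptions, since the paper does not say how samples were allocated to each half.

```python
import random

def split_per_character(samples_by_char, seed=0):
    """Per-character 50/50 split matching the quoted protocol:
    each character contributes 5 handwriting samples to training
    and 5 to testing; no validation set is carved out."""
    rng = random.Random(seed)
    train, test = [], []
    for samples in samples_by_char.values():
        assert len(samples) == 10  # 10 samples per character, implied by the quote
        shuffled = list(samples)
        rng.shuffle(shuffled)      # random assignment is our assumption
        train.extend(shuffled[:5])
        test.extend(shuffled[5:])
    return train, test
```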
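
The quoted layer widths also pin down the three network topologies. The PyTorch sketch below is an illustrative reconstruction under stated assumptions, not the authors' implementation (they used MATLAB's TRAINGDM/LEARNGDM): the sigmoid activations, learning rate, momentum, and cross-entropy loss are placeholders the paper does not specify.

```python
import torch
import torch.nn as nn

def make_net(widths):
    """Build a 4-layer MLP from the quoted layer widths
    (input, hidden, hidden, output). Sigmoid hidden units are an
    assumption; the paper does not name the activation function."""
    layers = []
    for i in range(len(widths) - 1):
        layers.append(nn.Linear(widths[i], widths[i + 1]))
        if i < len(widths) - 2:
            layers.append(nn.Sigmoid())
    return nn.Sequential(*layers)

# Layer widths quoted from the paper; note the hybrid input width
# 32 = 22 global + 10 component-layout feature dimensions.
net_g = make_net((22, 20, 10, 3))  # global features
net_c = make_net((10, 15, 10, 3))  # component layout features
net_o = make_net((32, 40, 20, 3))  # hybrid features

# TRAINGDM is MATLAB's gradient descent with momentum; SGD with a
# momentum term is the nearest PyTorch analogue. The lr and momentum
# values are placeholders, not reported in the paper.
optimizer = torch.optim.SGD(net_o.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()  # assumes the 3 outputs are class scores
```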