DeepWriteSYN: On-Line Handwriting Synthesis via Deep Short-Term Representations

Authors: Ruben Tolosana, Paula Delgado-Santos, Andres Perez-Uribe, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | These two cases are developed experimentally for individual digits and handwriting signatures, respectively, achieving in both cases remarkable results. Also, we provide experimental results for the task of on-line signature verification showing the high potential of DeepWriteSYN to improve significantly one-shot learning scenarios.
Researcher Affiliation | Academia | Biometrics and Data Pattern Analytics Lab, Universidad Autonoma de Madrid; University of Applied Sciences Western Switzerland (HEIG-VD)
Pseudocode | No | No pseudocode or algorithm blocks are explicitly presented. The methodology is described using text and block diagrams, not formal pseudocode or algorithm listings.
Open Source Code | No | The source code is publicly available in GitHub: https://github.com/magenta/magenta-js/tree/master/sketch (Explanation: This statement refers to the publicly available code for Sketch-RNN, on which DeepWriteSYN is based, not to the specific implementation of DeepWriteSYN itself, which includes custom elements such as the temporal segmentation and the specific training configurations for handwriting.)
Open Datasets | Yes | DeepWriteSYN is trained from scratch using handwritten digits of the public eBioDigitDB database (Tolosana, Vera-Rodriguez, and Fierrez 2019) [https://github.com/BiDAlab/eBioDigitDB]. [...] DeepWriteSYN is trained from scratch in this case using signatures from the public DeepSignDB (Tolosana et al. 2020b) [https://github.com/BiDAlab/DeepSignDB].
Dataset Splits | No | The development dataset is considered in the training process, using 6,200 total samples (620 samples per digit). Finally, after training, we consider the unseen subjects included in the evaluation. This evaluation dataset comprises 1,200 total samples (120 samples per digit). (Explanation: The paper describes development and evaluation splits, where the development set is used for training and the evaluation set for testing, but it does not specify a distinct validation set or split details for hyperparameter tuning; a subject-disjoint split matching these counts is sketched after the table.)
Hardware Specification | No | No specific hardware details (e.g., GPU, CPU, memory) used for running the experiments are provided. (Explanation: The paper mentions the device used for data acquisition (a Samsung Galaxy Note 10.1) but not the computational hardware used for training or evaluating the models.)
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch, or other libraries) are mentioned for their implementation. (Explanation: While the paper refers to the Sketch-RNN architecture and the Adam optimizer, it does not list the specific software versions or libraries used in the experimental setup.)
Experiment Setup | Yes | Regarding the number of memory blocks, 512 are used in the encoder and 2,048 in the decoder. For the GMM, M = 20 mixture components are used. The size of the latent feature vector Nz is 128. During training, layer normalization and recurrent dropout with a probability of 90% are considered. The Adam optimiser is considered with default parameters (learning rate of 0.0001). (A configuration sketch collecting these hyperparameters is given after the table.)
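To make the setup reported in the Experiment Setup row easier to scan, the hyperparameters can be collected into a single configuration. The sketch below is a hypothetical illustration only: the key names, the use of tf.keras, and the treatment of the 90% value as the recurrent dropout probability are assumptions, not the authors' code.

```python
import tensorflow as tf

# Hypothetical summary of the hyperparameters reported in the paper
# (key names are illustrative; they do not come from the authors' code).
HPARAMS = {
    "enc_rnn_size": 512,             # memory blocks in the encoder
    "dec_rnn_size": 2048,            # memory blocks in the decoder
    "num_mixture": 20,               # M = 20 GMM mixture components
    "z_size": 128,                   # size of the latent feature vector Nz
    "use_layer_norm": True,          # layer normalization during training
    "recurrent_dropout_prob": 0.90,  # "recurrent dropout with a probability of 90%"
    "learning_rate": 1e-4,           # Adam, remaining parameters left at defaults
}

# Adam optimiser with the reported learning rate; other arguments at their defaults.
optimizer = tf.keras.optimizers.Adam(learning_rate=HPARAMS["learning_rate"])
```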
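Similarly, the development/evaluation partition quoted in the Dataset Splits row can be expressed as a subject-disjoint split. The snippet below is a minimal sketch under assumed variable names and sample format; the paper only reports the resulting counts (6,200 development samples, 620 per digit, and 1,200 evaluation samples, 120 per digit, from unseen subjects).

```python
# Minimal sketch of a subject-disjoint development/evaluation split.
# The sample format (dicts with 'subject', 'digit', 'strokes') is assumed for illustration.
def split_by_subject(samples, development_subjects):
    """Separate samples into development (used for training) and evaluation (unseen subjects)."""
    development = [s for s in samples if s["subject"] in development_subjects]
    evaluation = [s for s in samples if s["subject"] not in development_subjects]
    return development, evaluation
```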