SynSig2Vec: Learning Representations from Synthetic Dynamic Signatures for Real-World Verification
Authors: Songxuan Lai, Lianwen Jin, Luojun Lin, Yecheng Zhu, Huiyun Mao
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | One highlight of our method is that it requires neither skilled nor random forgeries for training, yet it surpasses the state-of-the-art by a large margin on two public benchmarks. |
| Researcher Affiliation | Academia | Songxuan Lai, Lianwen Jin, Luojun Lin, Yecheng Zhu, Huiyun Mao School of Electronic and Information Engineering, South China University of Technology eesxlai@foxmail.com, eelwjin@scut.edu.cn |
| Pseudocode | Yes | Algorithm 1 Network optimization for learning dynamic signature representations |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Two benchmark dynamic signature datasets were used in this study, namely MCYT-100 (Ortega-Garcia et al. 2003) and SVC-Task2 (Yeung et al. 2004). |
| Dataset Splits | Yes | For MCYT-100, we used a 10-fold cross validation. The k-th fold corresponded to the k-th ten individuals, and we trained the models in a round-robin fashion on all but one of the folds. Skilled forgeries were not included in the training set. In the testing stage, we considered two scenarios, namely T5 and T1. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions general algorithms and mathematical tools (e.g., 'stochastic gradient descent', 'Butterworth lowpass filter') but does not specify any software libraries, frameworks, or their version numbers used in the implementation. |
| Experiment Setup | Yes | For signature synthesis, each signature component is resampled at 200 Hz...a Butterworth lowpass filter with a cutoff frequency of 10 Hz is applied...resampled at 100 Hz...One-dimensional CNN with six convolution layers and scaled exponential linear units (SELUs), as shown in Table 2...We set |P1| = |P2| = 20, |G1| = 5 and |G2| = 10. Therefore, the batch size was 1 + |G1| + |G2| = 16. For AP optimization, we chose the positive direction and λ was set as 5 for MCYT-100 and 10 for SVC-Task2. ...The learning rate, momentum and weight decay were set as 0.001, 0.9 and 0.001, respectively. We trained for M = 800 batches... |
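The preprocessing step quoted in the Experiment Setup row (a 10 Hz Butterworth lowpass filter applied to a 200 Hz signal, then resampling to 100 Hz) can be sketched as follows. This is a minimal illustration using SciPy, which the paper does not name as a dependency; the filter order and the use of zero-phase filtering are assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample

def preprocess(signal, fs_in=200, fs_out=100, cutoff=10.0, order=4):
    """Lowpass-filter one signature channel and resample it.

    fs_in:  sampling rate after the initial resampling step (200 Hz)
    fs_out: target sampling rate for the network input (100 Hz)
    cutoff: Butterworth cutoff frequency in Hz (10 Hz per the paper)
    order:  filter order (assumed; not stated in the paper)
    """
    # Normalize the cutoff by the Nyquist frequency (fs_in / 2).
    b, a = butter(order, cutoff / (fs_in / 2), btype="low")
    filtered = filtfilt(b, a, signal)            # zero-phase filtering
    n_out = int(len(filtered) * fs_out / fs_in)  # 200 Hz -> 100 Hz
    return resample(filtered, n_out)

# Example: a noisy 1-second trajectory channel sampled at 200 Hz.
t = np.linspace(0.0, 1.0, 200, endpoint=False)
x = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(200)
y = preprocess(x)
print(len(y))  # 100 samples after resampling to 100 Hz
```

The quoted batch-size arithmetic also checks out under this reading: one anchor plus |G1| = 5 plus |G2| = 10 gives 16 signatures per batch.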