Beyond Speech: Generalizing D-Vectors for Biometric Verification
Authors: Jacob Baldwin, Ryan Burnham, Andrew Meyer, Robert Dora, Robert Wright
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a comprehensive empirical analysis comparing our framework to the state-of-the-art in both domains. |
| Researcher Affiliation | Industry | Jacob Baldwin, Ryan Burnham, Andrew Meyer, Robert Dora, Robert Wright Assured Information Security, Inc. 153 Brooks Rd. Rome, NY 13441 {baldwinj, burnhamr, meyera, dorar, wrightr}@ainfosec.com |
| Pseudocode | No | The paper describes the model architectures and framework using diagrams (Figure 1, 2, 3) but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | Additionally, we use two publicly available datasets as benchmarks: one for keystroke (Clarkson) (Murphy et al. 2017) and one for gait (UCI) (Anguita et al. 2013). |
| Dataset Splits | No | The paper states "To train the D-Vectors models, the subjects are randomly partitioned into 70% for training and 30% for testing." and "A training-test split of 70/30% of the subject data is performed on the Multimod dataset", but does not explicitly describe a separate validation split. (See the split sketch after the table.) |
| Hardware Specification | No | The paper mentions "On a modern dual-CPU machine with GPU acceleration" but does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper describes various models and methods but does not provide specific software dependencies, such as library names with version numbers, needed to replicate the experiments. |
| Experiment Setup | Yes | "Dropout is applied aggressively, 75%, to this last layer to prevent over-fitting." and "Finally, dropout is applied to each DNN layer, 50% on the first layer and 75% on the remaining two layers, to prevent over-fitting." (See the dropout sketch after the table.) |
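
The paper reports only a 70/30 partition at the subject level, with no separate validation set. A minimal sketch of such a split, assuming plain NumPy and that subjects (not individual samples) are the unit being partitioned; the function name and seed are illustrative, not from the paper:

```python
import numpy as np

def split_subjects(subject_ids, train_frac=0.70, seed=0):
    """Randomly partition unique subject IDs into ~70% train / 30% test,
    mirroring the subject-level split the paper describes (no validation
    set is mentioned)."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(np.unique(subject_ids))
    n_train = int(round(train_frac * len(ids)))
    return ids[:n_train], ids[n_train:]

train_ids, test_ids = split_subjects(np.arange(100))
print(len(train_ids), len(test_ids))  # 70 30
```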
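
The quoted experiment-setup details fix only the dropout rates (50% on the first DNN layer, 75% on the remaining layers and on the final layer of the other model). Below is a hedged PyTorch sketch of a three-layer feed-forward embedding network using those rates; the layer widths, input dimension, and embedding size are assumptions, since the quoted text does not specify them:

```python
import torch
import torch.nn as nn

class DVectorDNN(nn.Module):
    """Illustrative 3-layer DNN applying the dropout rates quoted above:
    50% after the first layer, 75% after the remaining two.
    Layer widths and the embedding size are placeholder assumptions."""
    def __init__(self, in_dim=128, hidden=256, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(0.50),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.75),
            nn.Linear(hidden, emb_dim), nn.ReLU(), nn.Dropout(0.75),
        )

    def forward(self, x):
        return self.net(x)

model = DVectorDNN()
print(model(torch.randn(4, 128)).shape)  # torch.Size([4, 64])
```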