Learning Adaptive Hidden Layers for Mobile Gesture Recognition

Authors: Ting-Kuei Hu, Yen-Yu Lin, Pi-Cheng Hsiu

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed approach is evaluated on a benchmark for gesture recognition and a newly collected dataset. Superior performance demonstrates its effectiveness.
Researcher Affiliation | Academia | Ting-Kuei Hu, Yen-Yu Lin, Pi-Cheng Hsiu; Research Center for Information Technology Innovation, Academia Sinica, Taiwan. Email: {tkhu, yylin, pchsiu}@citi.sinica.edu.tw
Pseudocode | No | No explicit pseudocode or algorithm blocks are provided; the methodology is described through mathematical equations and textual explanations.
Open Source Code | Yes | "The gesture dataset that we collected and the source codes of the proposed AHL will be available at http://cvlab.citi.sinica.edu.tw/ProjectWeb/AHL/."
Open Datasets | Yes | "The proposed AHL is evaluated on one benchmark dataset for gesture recognition, the ChaLearn LAP large-scale isolated gesture dataset (IsoGD) (Wan et al. 2016), and a new dataset collected by us... The gesture dataset that we collected and the source codes of the proposed AHL will be available at http://cvlab.citi.sinica.edu.tw/ProjectWeb/AHL/."
Dataset Splits | Yes | The dataset is divided into training, validation, and testing subsets. Because the testing subset lacks labels, the proposed approach and the competing approaches are trained on the training subset and evaluated on the validation subset.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) are provided for the experimental setup.
Software Dependencies | No | The work is implemented with the Theano library and evaluated on two datasets, but the Theano version number is not specified.
Experiment Setup | Yes | On the IsoGD dataset, the setting of the applied network (Zhu et al. 2017) is followed. The number of neurons per group is 512 for the AHL applied to the last layer of each modality-specific sub-network, and 249 for the AHL on top of the modality-specific sub-networks. The batch size and weight decay are set to 13 and 0.00004, respectively. The base learning rate is initialized to 0.1 and dropped by a factor of 2 every 5 epochs, and at most 100 epochs are conducted.
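The hyper-parameters reported in the Experiment Setup row can be summarized concretely. The sketch below is an illustrative stand-in, not the authors' Theano code: the constant names and the `learning_rate` helper are assumptions, but the values (batch size 13, weight decay 0.00004, base learning rate 0.1 dropped by a factor of 2 every 5 epochs, at most 100 epochs) are taken directly from the paper's stated setup.

```python
# Hypothetical restatement of the reported training configuration.
# Only the numeric values come from the paper; names are illustrative.

BATCH_SIZE = 13
WEIGHT_DECAY = 0.00004
BASE_LR = 0.1
DROP_FACTOR = 2    # learning rate divided by 2 ...
DROP_EVERY = 5     # ... every 5 epochs
MAX_EPOCHS = 100


def learning_rate(epoch: int) -> float:
    """Step schedule: lr = BASE_LR / DROP_FACTOR ** (epoch // DROP_EVERY)."""
    return BASE_LR / (DROP_FACTOR ** (epoch // DROP_EVERY))


if __name__ == "__main__":
    # Show the schedule at a few representative epochs.
    for epoch in (0, 4, 5, 10, MAX_EPOCHS - 1):
        print(f"epoch {epoch:3d}: lr = {learning_rate(epoch):.6g}")
```

Under this reading of the setup, the learning rate is 0.1 for epochs 0-4, 0.05 for epochs 5-9, 0.025 for epochs 10-14, and so on until training stops at epoch 100.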