Semi-Supervised Learning for Surface EMG-based Gesture Recognition

Authors: Yu Du, Yongkang Wong, Wenguang Jin, Wentao Wei, Yu Hu, Mohan Kankanhalli, Weidong Geng

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on the NinaPro, CapgMyo and csl-hdemg datasets validate the efficacy of our proposed approach, especially when the labeled samples are very scarce."
Researcher Affiliation | Academia | (1) College of Computer Science, Zhejiang University; (2) College of Information Science and Electronic Engineering, Zhejiang University; (3) Smart Systems Institute, National University of Singapore; (4) School of Computing, National University of Singapore
Pseudocode | No | The paper describes the model architecture and training process in text and with a diagram (Figure 2), but provides no structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The codes are available at http://zju-capg.org/myo/semi."
Open Datasets | Yes | "We evaluated our approach using three public datasets, namely NinaPro dataset [Atzori et al., 2014] (sub-dataset 1), CapgMyo dataset [Geng et al., 2016], and csl-hdemg dataset [Amma et al., 2015]."
Dataset Splits | Yes | "Our ConvNet was trained with approximately two thirds of the trials of each subject and tested with the remaining one third. ... For each recording session, we performed a leave-one-out cross-validation, in which each of the 10 trials was used in turn as the test set and a ConvNet was trained by using the remaining 9 trials." A sketch of these two split protocols is given after this table.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., CPU or GPU model, memory); it only names the data-acquisition devices (e.g., CyberGlove II).
Software Dependencies | No | "The deep-learning framework is based on MXNet [Chen et al., 2015]." The paper names MXNet but does not give its version number, nor any other software dependency with a version.
Experiment Setup | Yes | "In all the experiments, the ConvNet was trained using Stochastic Gradient Descent with a batch size of 1000, 28 epochs, and a weight decay of 0.0001. The learning rate started at 0.1 and was divided by 10 after the 16th and 24th epochs. ... we fixed the temporal distance between two randomly selected frames (δ) in task 2 to be 10 frames (NinaPro DB1), 100 frames (CapgMyo), and 205 frames (csl-hdemg) (equivalent to 10 ms in each dataset). ... we fix the hyperparameters of the proposed method in all experiments, where = β = 1 and k = 512." The optimizer and learning-rate schedule are sketched after this table.
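
A minimal Python sketch of the two split protocols quoted under "Dataset Splits": the intra-subject two-thirds / one-third split by trials, and the per-session leave-one-trial-out cross-validation over 10 trials. This is an illustration only, not the authors' released code; the trial lists and helper functions are hypothetical placeholders.

    # Sketch of the quoted split protocols (illustrative, not the authors' code).

    def intra_subject_split(trials):
        # Roughly two thirds of a subject's trials for training, the rest for testing.
        cut = (2 * len(trials)) // 3
        return trials[:cut], trials[cut:]  # (train_trials, test_trials)

    def leave_one_trial_out(trials):
        # csl-hdemg protocol: each of the 10 trials serves as the test set once.
        for held_out in trials:
            train = [t for t in trials if t != held_out]
            yield train, [held_out]

    # Example: 10 trials per recording session, as in csl-hdemg.
    trials = list(range(10))
    for fold, (train, test) in enumerate(leave_one_trial_out(trials)):
        print(f"fold {fold}: train on {len(train)} trials, test on trial {test[0]}")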
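
The optimizer settings quoted under "Experiment Setup" correspond to a simple step learning-rate schedule. Below is a hedged sketch using MXNet, which the paper names as its framework; the batches_per_epoch value is a placeholder, and this is not the authors' training script.

    import mxnet as mx

    # Quoted settings: batch size 1000, 28 epochs, weight decay 1e-4,
    # learning rate 0.1 divided by 10 after the 16th and 24th epochs.
    batch_size = 1000
    num_epochs = 28
    batches_per_epoch = 100  # placeholder: depends on the training-set size

    # MultiFactorScheduler counts parameter updates, so convert epochs to update steps.
    schedule = mx.lr_scheduler.MultiFactorScheduler(
        step=[16 * batches_per_epoch, 24 * batches_per_epoch],
        factor=0.1,
    )

    optimizer = mx.optimizer.SGD(
        learning_rate=0.1,
        wd=0.0001,
        lr_scheduler=schedule,
    )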