LabelForest: Non-Parametric Semi-Supervised Learning for Activity Recognition

Authors: Yuchao Ma, Hassan Ghasemzadeh

AAAI 2019

Reproducibility checklist (variable, result, and supporting LLM response):
Research Type: Experimental. Evidence: "Our thorough analysis on three human activity datasets demonstrates that LabelForest achieves a labeling accuracy of 90.1% in the presence of a skewed label distribution in the seed data. Compared to self-training and other sequential learning algorithms, LabelForest achieves up to 56.9% and 175.3% improvement in accuracy on balanced and unbalanced seed data, respectively."
Researcher Affiliation: Academia. Evidence: "Yuchao Ma, Hassan Ghasemzadeh, School of Electrical Engineering & Computer Science, Washington State University, Pullman, WA 99164, USA, {yuchao.ma, hassan.ghasemzadeh}@wsu.edu"
Pseudocode: Yes. Evidence: Algorithm 1 (Greedy Spanning Forest) and Algorithm 2 (Silhouette-based Filtering).
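The two algorithms named above can be sketched roughly as follows. This is a minimal illustration only: it assumes Prim-style greedy tree growth from labeled seed points and the standard silhouette coefficient, which may differ in detail from the authors' pseudocode and released code.

```python
import math

def greedy_spanning_forest(points, seed_labels):
    """Grow trees from labeled seeds by repeatedly attaching the
    unlabeled point with the shortest edge to any labeled point
    (a Prim-style greedy growth; an assumption, not the paper's
    exact Algorithm 1)."""
    labels = dict(seed_labels)                      # index -> label
    unlabeled = set(range(len(points))) - set(labels)
    while unlabeled:
        # globally shortest edge from a labeled to an unlabeled point
        i, j = min(((i, j) for i in labels for j in unlabeled),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        labels[j] = labels[i]                       # inherit the tree's label
        unlabeled.remove(j)
    return labels

def silhouette_filter(points, labels, threshold=0.25):
    """Keep only points whose silhouette coefficient meets the
    threshold, i.e. points sitting clearly inside their own cluster."""
    groups = {}
    for i, lab in labels.items():
        groups.setdefault(lab, []).append(i)
    kept = {}
    for i, lab in labels.items():
        same = [j for j in groups[lab] if j != i]
        others = [idxs for l, idxs in groups.items() if l != lab]
        if not same or not others:
            continue
        a = sum(math.dist(points[i], points[j]) for j in same) / len(same)
        b = min(sum(math.dist(points[i], points[j]) for j in idxs) / len(idxs)
                for idxs in others)
        if (b - a) / max(a, b) >= threshold:
            kept[i] = lab
    return kept
```

On a toy example with two tight clusters and one seed per activity, the forest propagates each seed's label through its cluster, and the filter then discards any point that straddles the cluster boundary.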
Open Source Code: Yes. Evidence: "Software package of LabelForest and sample data are made publicly available at https://github.com/y-max/LabelForest."
Open Datasets: Yes. Evidence: "We conducted comprehensive analyses to evaluate the performance of LabelForest using three datasets: (1) HART (Anguita et al. 2013; Reyes-Ortiz et al. 2016); (2) Smart Sock; (3) Phone (Stisen et al. 2015)."
Dataset Splits: No. The paper mentions a 'separate test set' but does not specify split percentages, sample counts, or the methodology used to create the splits (e.g., k-fold cross-validation, stratified split, random seed).
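For context, the kind of split specification this row looks for can be stated in a few lines of code. The function below is a generic, illustrative example of a deterministic stratified split (not something from the paper): a fixed fraction of each class goes to the test set, and a fixed RNG seed makes the split reproducible.

```python
import random

def stratified_split(labels, test_frac=0.2, seed=42):
    """Deterministic stratified split: roughly test_frac of each class
    goes to the test set; the fixed seed makes the split reproducible."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for lab, idxs in by_class.items():
        rng.shuffle(idxs)
        k = max(1, round(len(idxs) * test_frac))  # at least one per class
        test += idxs[:k]
        train += idxs[k:]
    return sorted(train), sorted(test)
```

Reporting the fraction, the stratification variable, and the seed (here 0.2, the activity label, and 42) is enough for an exact reconstruction of the split.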
Hardware Specification: No. The paper does not describe the hardware used to run the experiments (e.g., CPU/GPU models, memory, or cloud instance types).
Software Dependencies: No. The paper mentions using 'the SVM algorithm' but does not name the libraries or frameworks used for the implementation, let alone their version numbers (e.g., 'scikit-learn 0.24', 'PyTorch 1.9').
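What this row asks for is typically a short pinned dependency list. The package names and versions below are purely illustrative of the format, not taken from the paper or its repository:

```
# requirements.txt (illustrative example only)
numpy==1.16.2
scipy==1.2.1
scikit-learn==0.20.3
```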
Experiment Setup: No. The paper states that the SVM algorithm was chosen but does not provide experimental setup details such as hyperparameter values (e.g., kernel type, regularization parameter C), optimization settings, or training schedules.
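To make the missing information concrete: an SVM setup can be documented in full with a handful of values, recorded alongside the results. The values below are illustrative defaults (they mirror scikit-learn's `SVC` parameter names), not the settings used in the paper.

```python
# Illustrative hyperparameter record (NOT from the paper): the handful
# of values that would make an SVM-based experiment reproducible.
svm_config = {
    "kernel": "rbf",       # kernel type
    "C": 1.0,              # soft-margin regularization strength
    "gamma": "scale",      # RBF kernel bandwidth heuristic
    "class_weight": None,  # per-class weighting (relevant for skewed seed data)
    "random_state": 0,     # RNG seed for reproducibility
}
```

Logging such a dictionary with each run (e.g., `SVC(**svm_config)` in scikit-learn) would satisfy this checklist item.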