Towards Discriminant Analysis Classifiers Using Online Active Learning via Myoelectric Interfaces

Authors: Andres G. Jaramillo-Yanez, Marco E. Benalcázar, Sebastian Sardina, Fabio Zambetta

AAAI 2022 (pp. 6996-7004) | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We then provide experimental evidence that our approach improves the performance of DA classifiers and is robust to mislabeled data, and that our soft-labeling technique has better performance than existing state-of-the-art methods.
Researcher Affiliation | Academia | 1 School of Computing Technologies, RMIT University, Melbourne, Australia; 2 Artificial Intelligence and Computer Vision Research Lab, Escuela Politécnica Nacional, Quito, Ecuador
Pseudocode | No | The paper describes its algorithms using mathematical equations and textual explanations (e.g., Definitions 3 and 5), but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | We make publicly available the code (including all supplementary material) of this empirical evaluation to easily compare our approach with future approaches in this field. https://github.com/andresjarami/Online_DAclassifier
Open Datasets | Yes | We use five publicly available datasets that contain hand gestures' sEMG data. We use one gesture per class for training and the other gestures for updating and testing of the proposed model, according to the description in Table 1.
Dataset Splits | No | The paper defines splits for 'train', 'update', and 'test' gestures in Table 1, but the 'update' gestures feed the online active-learning process rather than serving as a validation set for hyperparameter tuning or model selection in the traditional sense. Thus, no separate validation split is explicitly mentioned (a minimal sketch of this gesture split appears after this table).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, or cloud computing instances) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies or version numbers (e.g., Python, PyTorch, TensorFlow, or specific libraries) that would be needed to reproduce the experiments.
Experiment Setup | Yes | In this experiment, we determine the best parameters λ and τ (shown in Appendix B) for each dataset and feature set from the set {0, 0.1, ..., 1} using grid-search optimization. For the five DA classifiers, Figure 2 shows the average classification accuracy of the users in the five datasets using the three feature sets described above. To determine whether the accuracy differences between the tested methods are statistically significant, we use the two-tailed Wilcoxon signed-ranks test at p-value < 0.05 (Demšar 2006). We use black arrows to indicate that the accuracies come from the same distribution (i.e., there is no statistical difference). A hedged sketch of this setup appears below.
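
The gesture split quoted in the Open Datasets and Dataset Splits rows (one gesture per class for training, the remaining gestures for updating and testing) can be sketched as follows. This is a minimal illustration, not the paper's code: the `gestures_by_class` layout and the `n_update` proportion are assumptions, since the exact per-dataset splits are given in Table 1 of the paper.

```python
# Minimal sketch of a per-class train/update/test gesture split.
# ASSUMPTIONS: `gestures_by_class` maps a class label to an ordered list of
# recorded gesture repetitions; `n_update` is illustrative, not from Table 1.
from collections import defaultdict

def split_gestures(gestures_by_class, n_update):
    """One repetition per class for training, the next `n_update` for the
    online-updating stream, and the rest held out for testing."""
    train, update, test = defaultdict(list), defaultdict(list), defaultdict(list)
    for label, reps in gestures_by_class.items():
        train[label].append(reps[0])                # one gesture per class for training
        update[label].extend(reps[1:1 + n_update])  # stream used by online active learning
        test[label].extend(reps[1 + n_update:])     # held-out gestures for evaluation
    return train, update, test
```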
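The Experiment Setup row can likewise be sketched: a grid search over λ and τ in {0, 0.1, ..., 1}, followed by a two-tailed Wilcoxon signed-ranks test on paired per-user accuracies at p-value < 0.05. Here `evaluate` is a hypothetical stand-in for the paper's train-update-score routine, and `scipy.stats.wilcoxon` is a standard implementation of the test, not necessarily the one the authors used.

```python
# Hedged sketch of the quoted experiment setup (not the authors' code).
import numpy as np
from scipy.stats import wilcoxon

# Grid {0, 0.1, ..., 1} for both lambda and tau, as quoted from the paper.
GRID = np.round(np.arange(0.0, 1.01, 0.1), 1)

def grid_search(evaluate, grid=GRID):
    """Return the (lambda, tau) pair maximizing `evaluate`, a hypothetical
    callable that trains/updates the DA classifier and returns accuracy."""
    return max(((lam, tau) for lam in grid for tau in grid),
               key=lambda pair: evaluate(*pair))

def same_distribution(acc_a, acc_b, alpha=0.05):
    """Two-tailed Wilcoxon signed-ranks test on paired per-user accuracies;
    True means no statistically significant difference (the 'black arrow' case)."""
    _, p_value = wilcoxon(acc_a, acc_b, alternative="two-sided")
    return p_value >= alpha
```

The Wilcoxon signed-ranks test is the pairwise comparison recommended by Demšar (2006) for comparing two classifiers across multiple users or datasets, which matches the quoted setup.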