Novel Motion Patterns Matter for Practical Skeleton-Based Action Recognition

Authors: Mengyuan Liu, Fanyang Meng, Chen Chen, Songtao Wu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on our newly collected dataset verify that Mask-GCN outperforms most GCN-based methods when testing with various novel motion patterns."
Researcher Affiliation | Collaboration | Mengyuan Liu (1), Fanyang Meng (2), Chen Chen (3), Songtao Wu (4): (1) Key Laboratory of Machine Perception, Peking University, Shenzhen Graduate School; (2) Peng Cheng Laboratory; (3) University of Central Florida; (4) Sony R&D Center China
Pseudocode | No | The paper describes its method with formulas and block diagrams (Fig. 2, 3) but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about, or a link to, an open-source code release.
Open Datasets | No | "To evaluate our method, we use the pipeline shown in Fig. 4 to collect a new dataset. There are 21780 3D skeleton sequences in our dataset."
Dataset Splits | Yes | "Four types of evaluation protocols are performed, i.e., cross-subject recognition with low training data (CS 1), cross-subject recognition with more training data (CS 2), cross-view recognition with low training data (CV 1), and cross-view recognition with more training data (CV 2). Specifically, CS 1 uses 10 subjects for training, CS 2 uses 20 subjects for training, CV 1 uses 1 view for training, and CV 2 uses 2 views for training."
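The cross-subject protocols quoted above partition sequences by performer identity, so that test subjects are never seen during training. A minimal sketch of such a split; the subject IDs, field names, and helper function here are hypothetical illustrations, not from the paper:

```python
# Hypothetical sketch of a cross-subject split (e.g., CS 1 with a fixed set
# of training subjects). Field names and subject IDs are illustrative only.

def cross_subject_split(sequences, train_subjects):
    """Partition skeleton sequences into train/test sets by subject ID."""
    train, test = [], []
    for seq in sequences:
        if seq["subject"] in train_subjects:
            train.append(seq)
        else:
            test.append(seq)
    return train, test

# Toy example: four sequences from three subjects; subjects {0, 1} train.
data = [
    {"subject": 0, "label": "wave"},
    {"subject": 1, "label": "jump"},
    {"subject": 2, "label": "wave"},
    {"subject": 0, "label": "run"},
]
train_set, test_set = cross_subject_split(data, train_subjects={0, 1})
```

A cross-view split (CV 1, CV 2) would follow the same pattern with a camera-view field in place of the subject ID.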
Hardware Specification | No | The paper does not describe the hardware used to run the experiments.
Software Dependencies | No | The paper does not specify version numbers for the software dependencies used in the experiments.
Experiment Setup | Yes | "We set τ to 0.01 as the default for our policy network."
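The quote above does not spell out how τ is used. A common role for a small temperature in a policy network is to sharpen a softmax (or Gumbel-softmax) over discrete choices toward a near-one-hot selection; the sketch below illustrates that effect under this assumption, with all names hypothetical:

```python
import math

def softmax_with_temperature(logits, tau):
    """Temperature-scaled softmax; as tau -> 0 it approaches argmax."""
    scaled = [l / tau for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# With a small temperature such as tau = 0.01, modest differences in the
# logits yield a nearly one-hot distribution over the choices.
probs = softmax_with_temperature([1.0, 0.5, 0.2], tau=0.01)
```

With τ = 1 the same logits would give a much softer distribution, so a small default like 0.01 effectively makes the policy commit to a single choice.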