Learning Discriminative Activated Simplices for Action Recognition
Authors: Chenxu Luo, Chang Ma, Chunyu Wang, Yizhou Wang
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We justify the power of the model on benchmark datasets and witness consistent performance improvements. In this section, we evaluate the action recognition performance on three popular benchmark datasets (Li, Zhang, and Liu 2010) (Seidenari et al. 2013) (Xia, Chen, and Aggarwal 2012) and a self-composed large dataset. |
| Researcher Affiliation | Collaboration | Chenxu Luo,1 Chang Ma,1 Chunyu Wang,2 Yizhou Wang,1; 1 Nat'l Eng. Lab. for Video Technology, Cooperative Medianet Innovation Center, Key Lab. of Machine Perception (MoE), Sch'l of EECS, Peking University, Beijing, 100871, China; 2 Microsoft Research |
| Pseudocode | No | The paper describes the optimization steps in text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions using 'the code provided by (Wang et al. 2016)' but does not provide any statement or link indicating that the source code for the methodology described in this paper is openly available. |
| Open Datasets | Yes | In this section, we evaluate the action recognition performance on three popular benchmark datasets (Li, Zhang, and Liu 2010) (Seidenari et al. 2013) (Xia, Chen, and Aggarwal 2012) and a self-composed large dataset. |
| Dataset Splits | Yes | For the MSR-Action3D dataset: 'Most existing works choose five subjects for training and the remaining five subjects for testing, e.g. in (Li, Zhang, and Liu 2010), and report the result based on a single split. ... we experiment with all 252 possible subject splits and report the average accuracy.' For the Florence dataset: 'Following the data suggestion, we adopt a leave-one-actor-out protocol: we train the classifier using all the sequences from nine out of ten actors and test on the remaining one.' For the UTKinect dataset: 'We use the common leave-one-sequence-out evaluation criterion to report the performance. More specifically, one sequence is used for testing and the rest of the sequences are used for training.' |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers needed to replicate the experiment. |
| Experiment Setup | Yes | An action instance is a 3D pose sequence... we adopt the action-snippet representation which encodes the temporal order of poses by combining ten consecutive poses together as an element: y_μ = [y_μ, ..., y_{μ+9}]... For MSR-Action3D, 'We learned 20 bases for each class and 5 shared bases.' For Florence, 'We set the number of bases for each class to be 30 and the number of common bases to be 5 (275 in total) by cross-validation.' For UTKinect, 'We learn 20 bases for each class and 5 bases for the common part'. Penalty coefficients: 'In practice, λ1 is set to be around 0.1, and λ2 is set to be around 0.01.' |
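The action-snippet representation quoted above (stacking ten consecutive poses into one element) and the 252-split MSR-Action3D protocol can be sketched as follows; function and variable names here are illustrative, not from the paper, and the per-frame feature layout is an assumption:

```python
from itertools import combinations
import numpy as np

def action_snippets(poses, window=10):
    """Stack `window` consecutive poses into snippet vectors.

    poses: (T, D) array, one D-dim pose feature per frame (assumed layout).
    Returns a (T - window + 1, window * D) matrix whose row mu is the
    concatenation [y_mu, ..., y_{mu+window-1}], matching the paper's
    snippet definition.
    """
    T, D = poses.shape
    return np.stack([poses[m:m + window].reshape(-1)
                     for m in range(T - window + 1)])

# MSR-Action3D protocol: choosing 5 of the 10 subjects for training
# gives C(10, 5) = 252 possible subject splits, as the paper averages over.
n_splits = len(list(combinations(range(10), 5)))
```

A sequence of 20 frames with 60-dimensional poses would yield an 11 x 600 snippet matrix under this sketch.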