Unsupervised Representation Learning With Long-Term Dynamics for Skeleton Based Action Recognition
Authors: Nenggan Zheng, Jun Wen, Risheng Liu, Liangqu Long, Jianhua Dai, Zhefeng Gong
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We quantitatively evaluate the effectiveness of our learning approach on three well-established action recognition datasets. Experimental results show that our learned representation is discriminative for classifying actions and can substantially reduce the sequence inpainting errors. |
| Researcher Affiliation | Academia | Nenggan Zheng (1), Jun Wen (2), Risheng Liu (3), Liangqu Long (2), Jianhua Dai (4), Zhefeng Gong (5). (1) Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, Zhejiang, China; (2) College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China; (3) DUT-RU International School of Information Science & Engineering, Dalian University of Technology, Liaoning, China; (4) College of Information Science and Engineering, Hunan Normal University, Changsha, Hunan, China; (5) Department of Neurobiology, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China |
| Pseudocode | Yes | Algorithm 1 Training the conditional inpainting model. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We perform our experiments on the following three datasets: the CMU dataset (CMU 2003), the HDM05 dataset (Müller et al. 2007), and the Berkeley MHAD dataset (Ofli et al. 2013). |
| Dataset Splits | Yes | For the entire dataset, the testing protocol is 4-fold cross validation, and for the subset, it is evaluated with 3-fold cross validation. [...] We follow the experimental protocol proposed in (Du, Wang, and Wang 2015) and perform 10-fold cross validation on this dataset. |
| Hardware Specification | No | The paper does not specify any hardware details such as CPU/GPU models, memory, or cloud instances used for running the experiments. |
| Software Dependencies | No | We implement our model in Tensorflow (Abadi et al. 2016) and optimize it with ADAM (Kingma and Ba 2014). (No version number is provided for TensorFlow.) |
| Experiment Setup | Yes | We set dropout ratio to be 0.2. [...] We find that smaller λadv helps the Enc to learn a more effective representation, and we set it to 0.1 in experiments. [...] Parameter λz controls the weight of z in the total adversarial loss... we set it to 0.1. [...] Each layer of the Enc and Dec has 800 hidden units. The Dis network is smaller, with 200 hidden units each layer. |
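
The quoted experiment setup can be collected into a single configuration, and the two loss weights combined as one plausible weighting scheme. This is a minimal sketch, not the authors' code: only the values (dropout 0.2, λadv = 0.1, λz = 0.1, 800 hidden units for Enc/Dec, 200 for Dis) come from the paper; the exact way `total_encoder_loss` combines the terms is an assumption for illustration.

```python
# Hyperparameters reported in the paper's experiment setup.
CONFIG = {
    "dropout": 0.2,        # dropout ratio
    "lambda_adv": 0.1,     # weight of the adversarial loss (smaller helps the Enc)
    "lambda_z": 0.1,       # weight of z inside the total adversarial loss
    "enc_dec_hidden": 800, # hidden units per Enc/Dec layer
    "dis_hidden": 200,     # hidden units per Dis layer
}

def total_encoder_loss(inpaint_loss, adv_loss_seq, adv_loss_z, cfg=CONFIG):
    """One plausible combination of the weighted terms (illustrative only):
    the z-space adversarial term is scaled by lambda_z inside the total
    adversarial loss, which is then scaled by lambda_adv."""
    adv_total = adv_loss_seq + cfg["lambda_z"] * adv_loss_z
    return inpaint_loss + cfg["lambda_adv"] * adv_total
```

With these values the adversarial terms contribute only lightly to the encoder objective, consistent with the authors' observation that a smaller λadv yields a more effective representation.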
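
The dataset-split row quotes three k-fold protocols (4-fold on the full CMU set, 3-fold on its subset, 10-fold on HDM05 following Du, Wang, and Wang 2015). A generic index splitter, sketched here with the standard library only and not taken from the authors' code, shows what those protocols amount to:

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross validation.
    Earlier folds absorb the remainder when n_samples is not divisible by k."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    indices = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

# e.g. the 4-fold protocol applied to a hypothetical set of 10 sequences
folds = list(k_fold_indices(10, 4))
```

In practice one would shuffle (or stratify by action class) before splitting; the sketch keeps indices in order for clarity.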