Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions
Authors: Zhenyi Wang, Ping Yu, Yang Zhao, Ruiyi Zhang, Yufan Zhou, Junsong Yuan, Changyou Chen. Pages 12281–12288.
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show the superiority of our method in both diverse action-sequence generation and classification, relative to existing methods. |
| Researcher Affiliation | Academia | ¹State University of New York at Buffalo, ²Duke University. ¹{zhenyiwa, pingyu, yzhao63, yufanzho, jsyuan, changyou}@buffalo.edu, ²ryzhang@cs.duke.edu |
| Pseudocode | Yes | Finally, the whole training procedure of our model is described in Algorithm (See Appendix). |
| Open Source Code | Yes | Code is also made available. https://github.com/zheshiyige/Learning-Diverse-Stochastic-Human-Action-Generators-by-Learning-Smooth-Latent-Transitions |
| Open Datasets | Yes | We adopt the human-3.6m dataset (Catalin Ionescu and Sminchisescu 2014) and the NTU dataset (Shahroudy et al. 2016). |
| Dataset Splits | Yes | For the cross-subject evaluation, sequences for training (20 subjects) and testing (20 subjects) come from different subjects. ... After splitting and cleaning missing or incomplete sequences, there are 2260 and 1070 action sequences for training and testing, respectively, for cross-subject evaluation; and there are 2213 and 1117 action sequences for training and testing, respectively, for cross-view evaluation. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | No | The paper mentions 'implementation details' are provided in the Appendix, but the main text does not contain specific experimental setup details such as hyperparameter values or training configurations. |