A Deep Bi-directional Attention Network for Human Motion Recovery
Authors: Qiongjie Cui, Huaijiang Sun, Yupeng Li, Yue Kong
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on CMU database demonstrate that the proposed model consistently outperforms other state-of-the-art methods in terms of recovery accuracy and visualization. |
| Researcher Affiliation | Academia | Qiongjie Cui, Huaijiang Sun, Yupeng Li and Yue Kong, Nanjing University of Science and Technology, Nanjing, China; {cuiqiongjie,sunhuaijiang}@njust.edu.cn, starli777@hotmail.com, codekong1028@163.com |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper. |
| Open Source Code | Yes | The code will be available on the page: http://mocap.ai. |
| Open Datasets | Yes | In this paper, we use CMU mocap database with 31 joint markers for the human body. |
| Dataset Splits | Yes | λ_rec = 0.95 and λ_bone = 0.05 are the trade-off hyper-parameters that fine-tune the importance of each loss term; they are determined by 10-fold cross-validation. |
| Hardware Specification | No | No specific hardware details (like CPU/GPU models, memory) used for running the experiments were provided. |
| Software Dependencies | No | The paper mentions training with Adam and using dropout, but does not specify any software dependencies (e.g., libraries, frameworks) with version numbers. |
| Experiment Setup | Yes | Our network uses BLSTM as encoder and decoder, where each LSTM has 512 hidden units. The BAN model is trained using Adam [Kingma and Ba, 2014] with a learning rate of 0.001, and a mini-batch size of 128 is used to optimize the network. In our work, we use dropout [Srivastava et al., 2014] as the regularization method on the LSTM layer and the penultimate layer. With the dropout rate set to 0.4, the model has better generalization performance. (A hedged implementation sketch follows the table.) |
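
The sketch below is a minimal, hypothetical PyTorch rendering of the reported configuration (512-unit bidirectional LSTMs, Adam with learning rate 0.001, mini-batch 128, dropout 0.4, and the λ_rec = 0.95 / λ_bone = 0.05 loss weighting). The input dimensionality of 93 (31 CMU markers × 3 coordinates), the MSE reconstruction term, and the bone-length placeholder are assumptions; the paper's bidirectional attention mechanism is omitted, so this is not the authors' BAN implementation.

```python
import torch
import torch.nn as nn

class BLSTMSketch(nn.Module):
    """Hypothetical BLSTM encoder-decoder sketch (not the authors' BAN model)."""

    def __init__(self, input_size=93, hidden_size=512, dropout=0.4):
        super().__init__()
        # Bidirectional LSTM encoder and decoder, 512 hidden units each (per the paper).
        self.encoder = nn.LSTM(input_size, hidden_size,
                               batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden_size, hidden_size,
                               batch_first=True, bidirectional=True)
        # Dropout applied to the LSTM output and the penultimate layer, rate 0.4.
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(2 * hidden_size, input_size)

    def forward(self, corrupted_motion):
        enc, _ = self.encoder(corrupted_motion)
        dec, _ = self.decoder(self.dropout(enc))
        return self.out(self.dropout(dec))


def weighted_loss(recovered, ground_truth, bone_loss,
                  lambda_rec=0.95, lambda_bone=0.05):
    """Weighted sum of reconstruction and bone-length terms (weights from the paper);
    the MSE reconstruction term and the bone_loss argument are assumptions."""
    rec = nn.functional.mse_loss(recovered, ground_truth)
    return lambda_rec * rec + lambda_bone * bone_loss


model = BLSTMSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001

# One illustrative optimization step on a random mini-batch of 128 sequences.
x = torch.randn(128, 60, 93)   # corrupted motion (batch, frames, features) - shapes assumed
y = torch.randn(128, 60, 93)   # ground-truth motion
loss = weighted_loss(model(x), y, bone_loss=torch.tensor(0.0))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```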