Accurate and Steady Inertial Pose Estimation through Sequence Structure Learning and Modulation
Authors: Yinghao Wu, Chaoran Wang, Lu Yin, Shihui Guo, Yipeng Qin
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments across multiple benchmark datasets demonstrate the superiority of our approach against state-of-the-art methods and its potential to advance the design of the transformer architecture for fixed-length sequences. |
| Researcher Affiliation | Academia | (1) School of Informatics, Xiamen University, China; (2) School of Computer Science & Informatics, Cardiff University, UK |
| Pseudocode | No | The paper does not contain pseudocode or clearly labeled algorithm blocks. It describes the architecture and processes in text and diagrams. |
| Open Source Code | No | The paper only states, "Yes, we will release the code/data later," so no code was available at the time of publication. |
| Open Datasets | Yes | We use the following datasets in our experiments, which can be divided into three categories: 1) Synthetic dataset: AMASS [32]. 2) Real datasets with SMPL [29] skeleton: DIP-IMU [18] and TotalCapture [46]. 3) Real datasets with Xsens [41] skeleton: AnDy [33], CIP [37], and Emokine [9]. |
| Dataset Splits | No | The paper mentions training and test sets (e.g., "fine-tune it on the training set of DIP-IMU, then test it on the test set of DIP-IMU"), but does not specify a separate validation split. |
| Hardware Specification | Yes | We implement our method using the PyTorch [40] framework on one NVIDIA GeForce RTX 4090 GPU. ... We implement the live demo using a laptop equipped with an Intel Core i9-13900HX CPU and an NVIDIA GeForce RTX 4060 GPU. |
| Software Dependencies | Yes | We implement our method using the PyTorch [40] framework on one NVIDIA GeForce RTX 4090 GPU. PyTorch version is 2.0.0, and CUDA version is 11.8. (See the version-check sketch below the table.) |
| Experiment Setup | Yes | During the training stage, we use the AdamW [30] optimizer to train our model with a batch size of 4096. The learning rate is initialized to 0.0001 and decayed by 0.99 per epoch. (See the training sketch below the table.) |
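
To make the reported software stack concrete, here is a minimal version-check sketch. The PyTorch 2.0.0 and CUDA 11.8 versions and the RTX 4090 GPU come from the paper; the check itself is our own illustration, not part of the authors' release.

```python
# Hedged version check for the stack reported in the paper
# (PyTorch 2.0.0, CUDA 11.8, NVIDIA GeForce RTX 4090).
import torch

assert torch.__version__.startswith("2.0.0"), torch.__version__
assert torch.version.cuda == "11.8", torch.version.cuda
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # expect "NVIDIA GeForce RTX 4090"
```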
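
The training hyperparameters in the last row translate directly into a short PyTorch sketch. The optimizer (AdamW), batch size (4096), initial learning rate (0.0001), and per-epoch 0.99 decay are taken from the paper; the model, dataset, loss, and epoch count are hypothetical placeholders, since this section does not specify them.

```python
# Minimal training-loop sketch of the reported setup; placeholders are
# marked as such and do not reflect the authors' actual model or data.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(72, 72)  # placeholder model (not the paper's architecture)
data = TensorDataset(torch.randn(8192, 72), torch.randn(8192, 72))  # dummy data
loader = DataLoader(data, batch_size=4096, shuffle=True)  # batch size from the paper

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # AdamW, lr = 0.0001
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # x0.99 per epoch
criterion = nn.MSELoss()  # assumed loss; the paper's loss is not quoted in this section

for epoch in range(10):  # epoch count is a placeholder
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate once per epoch
```

Stepping `ExponentialLR` once per epoch (rather than per batch) matches the paper's "decayed by 0.99 per epoch" schedule.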