Self-Adaptive Motion Tracking against On-body Displacement of Flexible Sensors
Authors: Chengxu Zuo, Jiawei Fang, Shihui Guo, Yipeng Qin
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method is robust across different on-body position configurations. Our dataset and code are available at: https://github.com/ZuoCX1996/Self-Adaptive-Motion-Tracking-against-On-body-Displacement-of-Flexible-Sensors. (Abstract) The presence of Section 5 "Experimental Results" further indicates an empirical study. |
| Researcher Affiliation | Academia | ¹School of Informatics, Xiamen University, China; ²School of Computer Science & Informatics, Cardiff University, UK |
| Pseudocode | Yes | Algorithm 1 Unsupervised Adaptation to Displacements |
| Open Source Code | Yes | Our dataset and code are available at: https://github.com/ZuoCX1996/Self-Adaptive-Motion-Tracking-against-On-body-Displacement-of-Flexible-Sensors. |
| Open Datasets | Yes | Our dataset and code are available at: https://github.com/ZuoCX1996/Self-Adaptive-Motion-Tracking-against-On-body-Displacement-of-Flexible-Sensors. (Abstract) The dataset used in this paper consists of sensor readings and joint angles collected by a single user wearing the augmented elbow pad (Sec. 3) while performing elbow flexion... We have obtained ethical approval for the publication of both datasets and results. |
| Dataset Splits | Yes | Among the 11 groups, we randomly selected 5 of them as the training set D_train and used the remaining 6 groups as the test set D_test. (A minimal split sketch follows the table.) |
| Hardware Specification | Yes | All experiments were conducted on a desktop PC with an AMD Ryzen 3950X CPU and an NVIDIA RTX 3080 GPU. |
| Software Dependencies | No | The paper mentions software components such as an "Adam optimizer" and an "LSTM network" but does not provide version numbers for these or for any underlying libraries or frameworks (e.g., PyTorch, TensorFlow), which are necessary for full reproducibility. (An illustrative version-logging sketch follows the table.) |
| Experiment Setup | Yes | For the supervised pretraining, we employ an MSE loss: ... We use an Adam optimizer with a learning rate of 1e-3, β1 = 0.9, β2 = 0.999, and training epoch e = 30. For the adaptation, we use an Adam optimizer with a learning rate of 5e-3 and a weight decay of 0.001, β1 = 0.9, β2 = 0.999, and training epoch e = 20. We use n = 10 for both pretraining and adaptation. |
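The split reported in the Dataset Splits row is easy to reproduce in spirit. Below is a minimal sketch; the integer group IDs, the seed, and all variable names are assumptions for illustration, not the authors' released code (the paper only states that 5 of the 11 groups were chosen at random).

```python
import random

# Minimal sketch of the reported 5-train / 6-test group split.
ALL_GROUPS = list(range(11))  # 11 recording groups in the dataset

random.seed(0)  # illustrative seed; the paper does not report one
train_groups = random.sample(ALL_GROUPS, 5)
test_groups = [g for g in ALL_GROUPS if g not in train_groups]

assert len(train_groups) == 5 and len(test_groups) == 6
print("train:", sorted(train_groups), "test:", sorted(test_groups))
```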
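To illustrate the gap flagged in the Software Dependencies row: pinning, or at least logging, library versions at run time would close it. A minimal sketch, assuming a PyTorch-based stack (the paper never names its framework):

```python
# Illustrative version logging (not from the paper): record the exact
# software stack alongside experiment outputs for reproducibility.
import sys

import numpy
import torch

print("python:", sys.version.split()[0])
print("torch:", torch.__version__)   # framework is an assumption; the paper does not name it
print("numpy:", numpy.__version__)
print("cuda:", torch.version.cuda)   # None on CPU-only builds
```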
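The Experiment Setup row fully specifies the optimizers, so those hyperparameters can be transcribed directly. Everything else in the sketch below is an assumption for illustration: the LSTM regressor shape, the dummy tensors, and the reading of n = 10 as a window length are placeholders, not the authors' architecture, and the adaptation objective (the paper's Algorithm 1) is not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical LSTM regressor standing in for the paper's model; only the
# optimizer/loss settings below are taken from the Experiment Setup row.
class AngleRegressor(nn.Module):
    def __init__(self, in_dim=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

model = AngleRegressor()
n = 10  # per the paper; assumed here to be a sequence/window length

# Supervised pretraining: MSE loss, Adam, lr 1e-3, betas (0.9, 0.999), 30 epochs.
mse = nn.MSELoss()
pretrain_opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

x = torch.randn(8, n, 1)  # dummy sensor windows: (batch, n, channels)
y = torch.randn(8, n, 1)  # dummy joint-angle targets

for _ in range(30):
    loss = mse(model(x), y)
    pretrain_opt.zero_grad()
    loss.backward()
    pretrain_opt.step()

# Unsupervised adaptation: Adam, lr 5e-3, weight decay 0.001, 20 epochs.
adapt_opt = torch.optim.Adam(model.parameters(), lr=5e-3,
                             betas=(0.9, 0.999), weight_decay=0.001)
```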