Jointly Modeling Spatio-Temporal Features of Tactile Signals for Action Classification

Authors: Jimmy Lin, Junkai Li, Jiasi Gao, Weizhi Ma, Yang Liu

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on a public action classification dataset demonstrate that our model outperforms state-of-the-art methods in all metrics. |
| Researcher Affiliation | Academia | 1 Institute for AI Industry Research (AIR), Tsinghua University, Beijing, China; 2 Department of Computer Science and Technology, Tsinghua University, Beijing, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Aressfull/sock_classification. |
| Open Datasets | Yes | Our experiments are conducted on the public tactile signal dataset (http://senstextile.csail.mit.edu/), which is collected by individuals wearing two electronic socks while performing specific actions. |
| Dataset Splits | Yes | Following the providers' settings, 500 and 1,000 samples of each action are used for validation and testing, respectively, and the remaining samples are used for training (each action type is sampled to 4,000 samples). |
| Hardware Specification | Yes | All experiments are implemented in PyTorch 1.7 and executed on 4 Tesla V100 or GeForce RTX 3090 GPUs. |
| Software Dependencies | Yes | All experiments are implemented in PyTorch 1.7 and executed on 4 Tesla V100 or GeForce RTX 3090 GPUs. |
| Experiment Setup | Yes | Table 3 summarizes the tuned hyper-parameters: the tubelet parameters L and P are set to 5 and 4, the pretraining and fine-tuning epochs are set to 60, the embedding dimension D is 768, the batch size is 64, and the weight decay is 1e-4. |
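The reported experiment setup can be collected into a single configuration object. The sketch below is a hypothetical reconstruction: the key names (`tubelet_length`, `patch_size`, etc.) are our own labels for the paper's symbols L, P, and D, and the optimizer choice is not stated in the excerpt.

```python
# Hypothetical configuration sketch assembled from the reported hyper-parameters
# (Table 3 of the paper). Key names are assumptions; only the values are sourced.
config = {
    "tubelet_length": 5,    # L: temporal extent of each tubelet
    "patch_size": 4,        # P: spatial extent of each tubelet patch
    "embed_dim": 768,       # D: token embedding dimension
    "pretrain_epochs": 60,  # pretraining epochs
    "finetune_epochs": 60,  # fine-tuning epochs
    "batch_size": 64,
    "weight_decay": 1e-4,
}

# Basic sanity checks on the reported values.
assert config["tubelet_length"] > 0 and config["patch_size"] > 0
assert config["embed_dim"] % 64 == 0  # common head-divisibility convention
print(config)
```

Such a dictionary could be passed directly to a training script or serialized to JSON/YAML for reproducibility; the actual repository may organize these values differently.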