Hierarchical Contrast for Unsupervised Skeleton-Based Action Representation Learning
Authors: Jianfeng Dong, Shengkai Sun, Zhonglin Liu, Shujie Chen, Baolong Liu, Xun Wang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four datasets, i.e., NTU-60, NTU-120, PKU-I, and PKU-II, show that HiCo achieves a new state-of-the-art for unsupervised skeleton-based action representation learning in two downstream tasks including action recognition and retrieval, and its learned action representation is of good transferability. Besides, we also show that our framework is effective for semi-supervised skeleton-based action recognition. |
| Researcher Affiliation | Academia | 1College of Computer Science and Technology, Zhejiang Gongshang University, China 2Zhejiang Key Lab of E-Commerce, China |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Our code is available at https://github.com/HuiGuanLab/HiCo. |
| Open Datasets | Yes | Experiments are conducted on four popular skeleton-based action datasets, i.e., NTU-60 (Shahroudy et al. 2016), NTU-120 (Liu et al. 2019), PKU-I, and PKU-II (Liu et al. 2020). |
| Dataset Splits | Yes | On NTU-60 and NTU-120, we follow two standard evaluation protocols: cross-subject (x-sub) and cross-view (x-view). Following (Lin et al. 2020), we report the x-sub results on PKU-I and PKU-II. |
| Hardware Specification | No | No specific hardware (e.g., GPU model, CPU, memory) used for experiments was mentioned. |
| Software Dependencies | No | No specific software dependencies with version numbers were mentioned. |
| Experiment Setup | No | The paper does not provide specific hyperparameters like learning rate, batch size, or optimizer settings for the experimental setup. |