Contrastive Learning from Extremely Augmented Skeleton Sequences for Self-Supervised Action Recognition
Authors: Tianyu Guo, Hong Liu, Zhan Chen, Mengyuan Liu, Tao Wang, Runwei Ding
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Exhaustive experiments on NTU RGB+D 60, PKU-MMD, NTU RGB+D 120 datasets have verified that our AimCLR can significantly perform favorably against state-of-the-art methods under a variety of evaluation protocols with observed higher quality action representations. |
| Researcher Affiliation | Academia | Key Laboratory of Machine Perception, Peking University, Shenzhen Graduate School, China; School of Intelligent Systems Engineering, Sun Yat-sen University, China |
| Pseudocode | Yes | Algorithm 1: Energy-based attention-guided drop module. |
| Open Source Code | Yes | Our code is available at https://github.com/Levigty/AimCLR. |
| Open Datasets | Yes | PKU-MMD Dataset (Liu et al. 2020); NTU RGB+D 60 Dataset (Shahroudy et al. 2016); NTU RGB+D 120 Dataset (Liu et al. 2019) |
| Dataset Splits | No | The paper describes training and test splits, but does not provide specific details for a separate validation split, such as exact percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments, only mentioning the PyTorch framework. |
| Software Dependencies | No | All the experiments are conducted on the PyTorch (Paszke et al. 2019) framework. The paper mentions PyTorch but does not specify a version number or other software dependencies with version numbers. |
| Experiment Setup | Yes | The mini-batch size is set to 128. Specifically, the feature dimension is 128, the size of the memory bank is 32768, the momentum coefficient m is set to 0.999, and the temperature hyper-parameter τ is set to 0.07. For optimization, we use SGD with momentum (0.9) and weight decay (0.0001). The model is trained for 300 epochs with a learning rate of 0.1 (decreases to 0.01 at epoch 250). |
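For reference, the reported hyper-parameters map onto a standard PyTorch training loop roughly as follows. This is a minimal sketch, not the authors' code: `AimCLRModel` and `train_loader` are hypothetical stand-ins for the repository's actual model class and data pipeline, and the forward pass is assumed to return the contrastive loss directly.

```python
# Sketch of the training setup reported in the paper, assuming a MoCo-style
# encoder wrapper. `AimCLRModel` and `train_loader` are hypothetical.
import torch

FEATURE_DIM = 128      # dimension of the contrastive embedding
QUEUE_SIZE = 32768     # size of the negative-sample memory bank
MOMENTUM_M = 0.999     # momentum coefficient m for the key encoder
TEMPERATURE = 0.07     # temperature hyper-parameter tau
BATCH_SIZE = 128
EPOCHS = 300

model = AimCLRModel(feature_dim=FEATURE_DIM, queue_size=QUEUE_SIZE,
                    momentum=MOMENTUM_M, temperature=TEMPERATURE)

# SGD with momentum 0.9 and weight decay 0.0001, as reported.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)

# Learning rate decreases from 0.1 to 0.01 at epoch 250.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[250], gamma=0.1)

for epoch in range(EPOCHS):
    for batch in train_loader:   # mini-batches of size 128, assumed defined
        loss = model(batch)      # hypothetical: returns the contrastive loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```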