CHASE: Learning Convex Hull Adaptive Shift for Skeleton-based Multi-Entity Action Recognition

Authors: Yuhang Wen, Mengyuan Liu, Songtao Wu, Beichen Ding

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on six datasets, including NTU Mutual 11/26, H2O, Assembly101, Collective Activity and Volleyball, consistently verify our approach by seamlessly adapting to single-entity backbones and boosting their performance.
Researcher Affiliation | Collaboration | Yuhang Wen (Sun Yat-sen University, wenyh29@mail2.sysu.edu.cn); Mengyuan Liu (State Key Laboratory of General Artificial Intelligence, Peking University, Shenzhen Graduate School, nkliuyifang@gmail.com); Songtao Wu (Sony R&D Center China, Songtao.Wu@sony.com); Beichen Ding (Sun Yat-sen University, dingbch@mail.sysu.edu.cn)
Pseudocode | Yes | Algorithm 1 CHASE Wrapper: PyTorch-like Pseudo Code
Open Source Code | Yes | Our code is publicly available at https://github.com/Necolizer/CHASE.
Open Datasets | Yes | We conduct experiments on six multi-entity action recognition datasets. ...NTU Mutual 11 and NTU Mutual 26, respectively subsets of NTU RGB+D [41] and NTU RGB+D 120 [42]... H2O [13]... Assembly101 (ASB101) [12]... Collective Activity Dataset (CAD) [85]... Volleyball Dataset (VD) [86].
Dataset Splits | Yes | NTU Mutual 11 adopts the widely-used X-Sub and X-View criteria, while NTU Mutual 26 follows the X-Sub and X-Set criteria. ...We follow the training, validation, and test splits outlined in [13] in our experiments [for H2O]. ...We follow the training, validation, and test splits described in [12] for evaluations [for Assembly101].
Hardware Specification | Yes | Experiments are conducted on the GeForce RTX 3070 GPUs with PyTorch. ...Experiments are conducted with 8 GeForce RTX 3070 GPUs (GPU Memory: 8GB)...
Software Dependencies | Yes | using torch version 1.9.0+cu111, torchvision version 0.10.0+cu111, and CUDA version 11.4.
Experiment Setup | Yes | For CTR-GCN in NTU Mutual 26, we adopt input shape X ∈ R^(3×64×25×2), segment size (1, 1, 1) and λ = 0.1 in CHASE. SGD optimizer is used with Nesterov momentum of 0.9, an initial learning rate of 0.1 and a decay rate of 0.1 at the 80th and 100th epochs. Batch size is set to 64. More detailed configurations for each model are provided in the Appendix.
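The reported learning-rate schedule is a plain step decay: start at 0.1 and multiply by 0.1 at epochs 80 and 100. As a minimal sketch of that schedule (the function name `scheduled_lr` is illustrative, not from the paper's code; in the authors' PyTorch setup this would typically be handled by a milestone-based scheduler):

```python
import math


def scheduled_lr(epoch, base_lr=0.1, decay=0.1, milestones=(80, 100)):
    """Step-decay learning rate: multiply base_lr by `decay` once for
    each milestone epoch that has been reached (values from the paper's
    reported CTR-GCN / NTU Mutual 26 configuration)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= decay
    return lr


# Spot-check the schedule at the reported decay points.
assert math.isclose(scheduled_lr(0), 0.1)      # before the 80th epoch
assert math.isclose(scheduled_lr(80), 0.01)    # after the first decay
assert math.isclose(scheduled_lr(100), 0.001)  # after the second decay
```

With PyTorch 1.9.0 (the reported dependency), the same behavior would come from `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[80, 100]` and `gamma=0.1` attached to an `SGD(..., lr=0.1, momentum=0.9, nesterov=True)` optimizer.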