See Your Emotion from Gait Using Unlabeled Skeleton Data
Authors: Haifeng Lu, Xiping Hu, Bin Hu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Exhaustive experiments show that CAGE improves existing self-supervised methods by 5%-10% accuracy, and it achieves comparable or even superior performance to supervised methods. In this paper, we use the Emotion-Gait (E-Gait) dataset (Bhattacharya et al. 2020a) to evaluate our approach. |
| Researcher Affiliation | Academia | Haifeng Lu^1, Xiping Hu^1,2, Bin Hu^1,2. 1: School of Information Science and Engineering, Lanzhou University; 2: School of Medical Technology, Beijing Institute of Technology. luhf18@lzu.edu.cn, huxp@bit.edu.cn, bh@bit.edu.cn |
| Pseudocode | No | The paper describes algorithms and formulations using text and mathematical equations (Eqs. 1-11), but it does not include any formally structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code (e.g., 'We release our code...', 'The source code is available at...') nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | In this paper, we use Emotion-Gait (E-Gait) datasets (Bhattacharya et al. 2020a) to evaluate our approach. |
| Dataset Splits | No | We use a split of 4:1 for training and testing sets and fix it, and this split is used for all experiments. While the paper describes training and testing splits, it does not specify a separate validation set percentage or absolute count for data partitioning. |
| Hardware Specification | Yes | All the experiments are conducted with the PyTorch framework (Paszke et al. 2019). We train our model on one NVIDIA Titan V GPU. |
| Software Dependencies | No | All the experiments are conducted with the PyTorch framework (Paszke et al. 2019). While PyTorch is mentioned, a specific version number for PyTorch or any other software dependency (e.g., Python, CUDA) is not provided. |
| Experiment Setup | Yes | For data augmentation, we set the joint jittering j = 10 and the padding ratio γ = 6. For the encoder, A-GCN (Shi et al. 2019) is adopted as the backbone. For contrastive settings, we follow those in AimCLR (Guo et al. 2022), but the size of the negatives set N is 2,048. In particular, the size of ASMB is the same as the mini-batch size. For GE-CLR, we use an SGD optimizer (momentum = 0.9, weight decay = 0.0001) and train for 200 epochs with a learning rate of 0.1, which is multiplied by 0.1 at epochs 100 and 160. The mini-batch size is set to 32. CAGE follows the settings of GE-CLR, but the model trains for 300 epochs with a learning rate of 0.1 (multiplied by 0.1 at epochs 150 and 250). Linear Evaluation Protocol: the model is trained for 100 epochs with a learning rate of 30 (multiplied by 0.1 at epochs 50 and 70). Finetuned Evaluation Protocol: the model is trained for 20 epochs with a learning rate of 0.001 (multiplied by 0.1 at epoch 10). |
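Every training configuration quoted above uses the same step-decay pattern: a base learning rate multiplied by 0.1 at fixed milestone epochs. A minimal sketch of that schedule is below; the function name `lr_at_epoch` and its defaults are illustrative (chosen to match the GE-CLR settings), not an artifact from the paper:

```python
def lr_at_epoch(epoch, base_lr=0.1, milestones=(100, 160), gamma=0.1):
    """Step-decay schedule: multiply base_lr by gamma at each milestone passed.

    Defaults mirror the GE-CLR settings quoted above. The other protocols
    reuse the pattern with different numbers, e.g. CAGE would use
    milestones=(150, 250), and linear evaluation base_lr=30 with
    milestones=(50, 70).
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# lr_at_epoch(0)   -> 0.1    (before any milestone)
# lr_at_epoch(120) -> ~0.01  (after the first decay at epoch 100)
# lr_at_epoch(170) -> ~0.001 (after the second decay at epoch 160)
```

In PyTorch (the framework the paper reports using), the same behavior is what `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 160], gamma=0.1)` provides.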