Learning Graph-based Residual Aggregation Network for Group Activity Recognition
Authors: Wei Li, Tianzhao Yang, Xiao Wu, Zhaoquan Yuan
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on two popular benchmarks for group activity recognition clearly demonstrate the superior performance of our method in comparison with the state-of-the-art methods. |
| Researcher Affiliation | Academia | Wei Li, Tianzhao Yang, Xiao Wu and Zhaoquan Yuan, School of Computing and Artificial Intelligence, Southwest Jiaotong University. liwei@swjtu.edu.cn, tianzhao@my.swjtu.edu.cn, wuxiaohk@gmail.com, zqyuan@swjtu.edu.cn |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block is provided in the paper. |
| Open Source Code | No | No statement or link is provided indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Two popular benchmarks (Volleyball Dataset (VD) [Ibrahim et al., 2016] and Collective Activity Dataset (CAD) [Choi et al., 2009]) are used to evaluate our proposed method |
| Dataset Splits | No | The paper does not explicitly provide specific percentages or sample counts for training, validation, and test splits, nor does it reference predefined splits with explicit details. It states: "randomly sampling frames from a video clip are selected as the training samples on both two datasets". |
| Hardware Specification | No | No specific hardware details (e.g., GPU or CPU models, memory specifications) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions "Our model is implemented based on Pytorch" but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The ADAM optimizer with different learning rates is used to learn the network parameters. The hyper-parameters for ADAM are set as β1 = 0.9, β2 = 0.999 and ϵ = 10^-8. For the training of 40 epochs on VD, the initial learning rate is set to 1 × 10^-4 with a decay rate of 1/3 every 10 epochs. For the training of 30 epochs on CAD, the learning rates are set to 4 × 10^-5 and 1 × 10^-4 for ResNet18 and VGG16, respectively. The spatial constraint factors δ are set to 0.2 and 0.3 of the image width in the training of VD and CAD, empirically. L is set to 16. The batch sizes are set to 2 on both datasets. |
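The reported training schedule for VD (initial learning rate 1 × 10^-4, decayed by a factor of 1/3 every 10 epochs over 40 epochs) can be sketched as a small step-decay function. This is a minimal illustration of the stated schedule, not the authors' code; the function name `vd_learning_rate` and the 0-indexed epoch convention are our assumptions.

```python
def vd_learning_rate(epoch, initial_lr=1e-4, decay=1/3, step=10):
    """Step-decay learning rate: multiply by `decay` every `step` epochs.

    Mirrors the VD schedule reported in the paper (lr = 1e-4, decayed
    by 1/3 every 10 epochs). Epochs are assumed to be 0-indexed here.
    """
    return initial_lr * decay ** (epoch // step)

if __name__ == "__main__":
    # Print the learning rate at the start of each decay segment.
    for e in (0, 10, 20, 30):
        print(f"epoch {e:2d}: lr = {vd_learning_rate(e):.3e}")
```

In PyTorch this would typically be realized by pairing `torch.optim.Adam` (with the paper's β1 = 0.9, β2 = 0.999, ϵ = 10^-8) with a step scheduler such as `torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=1/3)`.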