Action Knowledge Transfer for Action Prediction with Partial Videos

Authors: Yijun Cai, Haoxin Li, Jian-Fang Hu, Wei-Shi Zheng

AAAI 2019, pp. 8118-8125

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on the UCF-101 and HMDB-51 datasets show that the proposed action knowledge transfer method can significantly improve the performance of action prediction, especially for the actions with small observation ratios (e.g., 10%).
Researcher Affiliation | Academia | Yijun Cai (1), Haoxin Li (1), Jian-Fang Hu (2), Wei-Shi Zheng (2,3); (1) School of Electronics and Information Technology, Sun Yat-sen University, China; (2) School of Data and Computer Science, Sun Yat-sen University, China; (3) The Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education
Pseudocode | No | The paper includes mathematical equations and architectural diagrams (Figure 2, Figure 3) but no explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of its source code.
Open Datasets | Yes | We test our method on two datasets: UCF-101 (Soomro, Zamir, and Shah 2012) and HMDB-51 (Kuehne et al. 2011).
Dataset Splits | Yes | Following (Kong, Tao, and Fu 2017; Kong et al. 2018), we use the first 15 groups of videos in UCF-101 split-1 for model training, the next 3 groups for validation, and the remaining 7 groups for testing. For HMDB-51, we follow the standard evaluation protocol using three training/testing splits and report the average accuracy over the three splits. (A sketch of the group-based UCF-101 split follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory used for running experiments.
Software Dependencies | No | The paper mentions using a 3D ResNeXt-101 backbone and stochastic gradient descent but does not provide version numbers for any software or libraries (e.g., Python, TensorFlow, PyTorch versions).
Experiment Setup | Yes | Stochastic gradient descent is employed for optimizing the model parameters, with a batch size of 64 and a momentum rate of 0.9. We follow the suggestion in (Wang et al. 2018) and set the margin m and scaling factor s for the AM-Softmax to 0.4 and 30, respectively. (A hedged PyTorch sketch of this setup follows the table.)
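Since UCF-101 clip names encode the shooting group (e.g., v_ApplyEyeMakeup_g08_c01.avi belongs to group 8), the group-based split quoted in the Dataset Splits row can be reproduced roughly as below. This is a minimal sketch under that filename convention, not the authors' code; the helper names are ours.

```python
import re

def ucf101_group(filename):
    """Extract the group id from a UCF-101 clip name,
    e.g. 'v_ApplyEyeMakeup_g08_c01.avi' -> 8."""
    return int(re.search(r'_g(\d+)_', filename).group(1))

def split_by_group(filenames):
    """Group-based split quoted above (after Kong et al.):
    groups 1-15 train, 16-18 validation, 19-25 test."""
    train, val, test = [], [], []
    for name in filenames:
        g = ucf101_group(name)
        if g <= 15:
            train.append(name)
        elif g <= 18:
            val.append(name)
        else:
            test.append(name)
    return train, val, test

# Example:
# split_by_group(['v_ApplyEyeMakeup_g08_c01.avi',
#                 'v_ApplyEyeMakeup_g17_c02.avi',
#                 'v_ApplyEyeMakeup_g21_c01.avi'])
# -> train gets g08, val gets g17, test gets g21
```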
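The Experiment Setup row pins down the optimizer (SGD, batch size 64, momentum 0.9) and the AM-Softmax hyper-parameters (m = 0.4, s = 30) but not the learning rate or any training code. The PyTorch sketch below is our illustrative reconstruction of that setup, not the authors' implementation; the linear backbone stand-in and the learning rate are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive-margin softmax (Wang et al. 2018) with the reported
    hyper-parameters: margin m = 0.4, scale s = 30."""
    def __init__(self, feat_dim, num_classes, m=0.4, s=30.0):
        super().__init__()
        self.m, self.s = m, s
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, features, labels):
        # Cosine similarities between L2-normalized features and class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the additive margin m from the target-class cosine only,
        # then scale all logits by s before the usual cross-entropy.
        one_hot = F.one_hot(labels, cos.size(1)).to(cos.dtype)
        return F.cross_entropy(self.s * (cos - self.m * one_hot), labels)

# Stand-in feature extractor; the paper uses a 3D ResNeXt-101 backbone.
backbone = nn.Linear(512, 2048)  # placeholder, not the real network
criterion = AMSoftmaxLoss(feat_dim=2048, num_classes=101)
optimizer = torch.optim.SGD(
    list(backbone.parameters()) + list(criterion.parameters()),
    lr=0.01,        # assumption: the paper does not report a learning rate
    momentum=0.9)   # momentum 0.9 as reported

# Batch size 64 as reported; dummy batch for a smoke test.
feats = backbone(torch.randn(64, 512))
loss = criterion(feats, torch.randint(0, 101, (64,)))
loss.backward()
optimizer.step()
```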