Attentive Temporal Pyramid Network for Dynamic Scene Classification
Authors: Yuanjun Huang, Xianbin Cao, Xiantong Zhen, Jungong Han
AAAI 2019, pp. 8497-8504
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and comparisons are conducted on three benchmark datasets and the results show our superiority over the state-of-the-art methods on all these three benchmark datasets. |
| Researcher Affiliation | Academia | Yuanjun Huang,1,2,3,4 Xianbin Cao,1,3,4 Xiantong Zhen,1,3,4 Jungong Han2 1School of Electronics and Information Engineering, Beihang University, Beijing, 100191, China 2Lancaster University, Lancaster, LA1 4YW, UK 3Key Laboratory of Advanced technology of Near Space Information System (Beihang University), Ministry of Industry and Information Technology of China 4Beijing Advanced Innovation Center for Big Data-Based Precision Medicine |
| Pseudocode | No | The paper describes methods with text and equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete statement or link regarding the release of source code for the described methodology. |
| Open Datasets | Yes | We evaluate our proposed method on three benchmark datasets: YUPENN++ dataset (Feichtenhofer, Pinz, and Wildes 2017), Maryland dataset (Shroff, Turaga, and Chellappa 2010) and ActivityNet dataset (Heilbron et al. 2015). |
| Dataset Splits | Yes | It consists of 10024, 4926 and 5044 videos in training, validation and test sets, respectively. |
| Hardware Specification | Yes | All experiments are conducted on a workstation with an Intel Core i7 CPU and an NVIDIA Titan X GPU. |
| Software Dependencies | No | The paper mentions 'Pytorch language' but does not provide a specific version number or other software dependencies with their versions. |
| Experiment Setup | Yes | The training procedure of ATP-Net follows standard ConvNet training (Huang et al. 2017a) (Zhou et al. 2016), with learning rate = 0.001, batch size = 32 and the momentum learning algorithm. |
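The reported setup (learning rate 0.001, batch size 32, "momentum learning algorithm") corresponds to standard SGD with momentum. A minimal pure-Python sketch of that update rule, for readers attempting reproduction; note the momentum coefficient of 0.9 is an assumption (a common default), as the paper does not state it:

```python
# Hyperparameters as reported in the paper; MOMENTUM is an assumed
# default (0.9) -- the paper gives no value for it.
LEARNING_RATE = 0.001
BATCH_SIZE = 32  # reported, but unused in this scalar-level sketch
MOMENTUM = 0.9

def sgd_momentum_step(params, grads, velocity,
                      lr=LEARNING_RATE, mu=MOMENTUM):
    """One SGD-with-momentum update: v <- mu*v + g;  p <- p - lr*v."""
    for i, g in enumerate(grads):
        velocity[i] = mu * velocity[i] + g
        params[i] -= lr * velocity[i]
    return params, velocity

# Toy usage: two scalar parameters with hand-picked gradients.
params = [1.0, -2.0]
velocity = [0.0, 0.0]
params, velocity = sgd_momentum_step(params, [0.5, -0.5], velocity)
print(params)  # first step reduces to p - lr*g: [0.9995, -1.9995]
```

In PyTorch this would correspond to `torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)` with a `DataLoader` batch size of 32, but the exact optimizer configuration would need to be confirmed against the authors' (unreleased) code.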