TEINet: Towards an Efficient Architecture for Video Recognition
Authors: Zhaoyang Liu, Donghao Luo, Yabiao Wang, Limin Wang, Ying Tai, Chengjie Wang, Jilin Li, Feiyue Huang, Tong Lu
AAAI 2020, pp. 11669-11676
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to verify the effectiveness of TEINet on several benchmarks (e.g., Something-Something V1&V2, Kinetics, UCF101 and HMDB51). Our proposed TEINet can achieve a good recognition accuracy on these datasets but still preserve a high efficiency. |
| Researcher Affiliation | Collaboration | (1) State Key Lab for Novel Software Technology, Nanjing University, China; (2) Youtu Lab, Tencent |
| Pseudocode | No | The paper describes its method in text and figures, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Something-Something V1&V2. (Goyal et al. 2017) is a large collection of video clips containing daily actions interacting with common objects. ... Kinetics-400. (Kay et al. 2017) is a large-scale dataset in action recognition... UCF101 (Soomro, Zamir, and Shah 2012) and HMDB51 (Kuehne et al. 2011). |
| Dataset Splits | No | The paper does not explicitly describe how the training, validation, and test splits were created for Something-Something or Kinetics; for UCF101/HMDB51 it only mentions 'three splits' without further definition. |
| Hardware Specification | Yes | For all of our experiments, we utilize SGD with momentum 0.9 and weight decay of 1e-4 to train our TEINet on Tesla M40 GPUs using a mini batch size of 64. ... by using a single NVIDIA Tesla P100 GPU to measure the latency and throughput. (See the latency-measurement sketch after the table.) |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | On the Kinetics dataset, we train our models for 100 epochs in total, starting with a learning rate of 0.01 and reducing to its 1/10 at 50, 75, 90 epochs. For all of our experiments, we utilize SGD with momentum 0.9 and weight decay of 1e-4 to train our TEINet on Tesla M40 GPUs using a mini batch size of 64. (A training-loop sketch follows the table.) |
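The optimizer and schedule values quoted in the Experiment Setup row map directly onto a standard PyTorch configuration. The sketch below is a minimal illustration under that assumption, not the authors' released code: the model and data pipeline are placeholders, and only the learning rate, momentum, weight decay, milestone epochs, epoch count, and batch size come from the paper.

```python
# Minimal sketch of the reported Kinetics training recipe, assuming PyTorch.
# Only lr, momentum, weight decay, milestones, epochs, and batch size are
# taken from the paper; the model and data pipeline are placeholders.
import torch
import torch.nn as nn

model = nn.Linear(2048, 400)  # placeholder, NOT the TEINet architecture

# "SGD with momentum 0.9 and weight decay of 1e-4"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

# Kinetics: 100 epochs, lr reduced to 1/10 at epochs 50, 75, and 90
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50, 75, 90], gamma=0.1)

criterion = nn.CrossEntropyLoss()
for epoch in range(100):
    # A train_loader (not shown) would yield mini-batches of size 64:
    # for clips, labels in train_loader:
    #     optimizer.zero_grad()
    #     loss = criterion(model(clips), labels)
    #     loss.backward()
    #     optimizer.step()
    scheduler.step()
```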
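The Hardware Specification row notes that latency and throughput were measured on a single NVIDIA Tesla P100, but the paper does not describe its measurement protocol. A common single-GPU PyTorch pattern is sketched below; the stand-in model, input shape, warm-up count, and iteration count are all assumptions.

```python
# Sketch of a common single-GPU latency/throughput measurement in PyTorch.
# The stand-in model, input shape, and iteration counts are assumptions;
# the paper only states that a single Tesla P100 was used.
import time
import torch
import torchvision

model = torchvision.models.resnet50(num_classes=400).cuda().eval()  # stand-in for TEINet
clip = torch.randn(1, 3, 224, 224).cuda()  # assumed per-clip input shape

with torch.no_grad():
    for _ in range(10):           # warm-up, excludes CUDA start-up cost
        model(clip)
    torch.cuda.synchronize()      # flush queued kernels before timing
    start = time.time()
    for _ in range(100):
        model(clip)
    torch.cuda.synchronize()      # wait for all timed kernels to finish
    latency = (time.time() - start) / 100

print(f"latency: {latency * 1e3:.2f} ms/clip, throughput: {1 / latency:.1f} clips/s")
```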