Intentional Evolutionary Learning for Untrimmed Videos with Long Tail Distribution

Authors: Yuxi Zhou, Xiujie Wang, Jianhua Zhang, Jiajia Wang, Jie Yu, Hao Zhou, Yi Gao, Shengyong Chen

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted extensive experiments on two untrimmed video datasets (THUMOS14 and ActivityNet v1.3), and our method has achieved excellent results compared to SOTA methods. ... We demonstrate the effectiveness and advancement of our proposed method on THUMOS14 and ActivityNet v1.3 datasets.
Researcher Affiliation | Academia | (1) Department of Computer Science, Tianjin University of Technology, Tianjin, China; (2) DCST, BNRist, RIIT, Institute of Internet Industry, Tsinghua University, Beijing, China
Pseudocode | No | The paper describes algorithms in text and through architectural diagrams but does not contain a formal pseudocode block or algorithm listing.
Open Source Code | Yes | The code and supplementary materials are available at https://github.com/Jennifer123www/Untrimmed Video.
Open Datasets | Yes | We use two popular untrimmed human action datasets, THUMOS14 (Idrees et al. 2017) and ActivityNet v1.3 (Caba Heilbron et al. 2015), as our benchmark datasets.
Dataset Splits | Yes | The THUMOS14 dataset... we use the 200 videos in the validation set for training... ActivityNet v1.3... 10,024 training videos and 4,926 validation videos. Following (Yang et al. 2021; Luo et al. 2021), we use the training set as current data stream to train our model and the validation set as new data stream for evaluation. (A split-configuration sketch follows the table.)
Hardware Specification | No | The paper states 'Details of experimental parameters are provided in the supplementary material' for implementation details, but no specific hardware specifications (e.g., CPU/GPU models, memory) are mentioned in the main text.
Software Dependencies | No | The paper mentions using the 'I3D (Carreira and Zisserman 2017) model to extract video features' and 'Stacked RNN' but does not specify version numbers for any software components (e.g., Python, PyTorch, specific libraries).
Experiment Setup | Yes | Taking into account the calculation cost and performance results, we chose length=100 and stride=20 as our experimental settings. ... Based on the above considerations, we selected fusion frequency=10 as our experimental setting. (A sliding-window sketch follows the table.)
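The Dataset Splits row summarizes how the two benchmarks are partitioned into data streams. Below is a minimal Python sketch of that configuration, assuming a plain dictionary layout; the key names and the get_streams helper are illustrative and do not come from the authors' repository.

```python
# Split configuration quoted in the Dataset Splits row. The layout and the
# get_streams() helper are illustrative assumptions, not the authors' code.
DATASET_SPLITS = {
    "THUMOS14": {
        # The 200 untrimmed videos of the validation set are used for training.
        "train": {"split": "validation", "num_videos": 200},
    },
    "ActivityNet_v1.3": {
        # The training set acts as the current data stream; the validation set
        # acts as the new data stream used for evaluation.
        "current_stream": {"split": "training", "num_videos": 10024},
        "new_stream": {"split": "validation", "num_videos": 4926},
    },
}


def get_streams(dataset: str) -> dict:
    """Return the stream-to-split mapping for a benchmark (hypothetical helper)."""
    return DATASET_SPLITS[dataset]


print(get_streams("ActivityNet_v1.3")["new_stream"])
# {'split': 'validation', 'num_videos': 4926}
```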
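The Experiment Setup row quotes a window length of 100 and a stride of 20 over the extracted feature sequence, plus a fusion frequency of 10. The sketch below shows one plausible reading of the length/stride setting as sliding-window segmentation of per-snippet I3D features; the function name, the NumPy layout, and how FUSION_FREQUENCY would be consumed are assumptions, not the authors' implementation.

```python
import numpy as np

# Hyperparameters quoted in the Experiment Setup row; how they are consumed
# below is assumed for illustration only.
WINDOW_LENGTH = 100    # snippets per window
STRIDE = 20            # snippets between consecutive window starts
FUSION_FREQUENCY = 10  # e.g. fuse/update every 10 steps (exact usage not shown)


def sliding_windows(features: np.ndarray,
                    length: int = WINDOW_LENGTH,
                    stride: int = STRIDE) -> list[np.ndarray]:
    """Split a (T, D) feature sequence into overlapping (length, D) windows."""
    windows = []
    for start in range(0, max(features.shape[0] - length + 1, 1), stride):
        windows.append(features[start:start + length])
    return windows


if __name__ == "__main__":
    # Dummy 1024-D I3D features for a 350-snippet untrimmed video.
    feats = np.random.randn(350, 1024)
    wins = sliding_windows(feats)
    print(len(wins), wins[0].shape)  # 13 windows, each of shape (100, 1024)
```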