MGFN: Magnitude-Contrastive Glance-and-Focus Network for Weakly-Supervised Video Anomaly Detection

Authors: Yingxian Chen, Zhengzhe Liu, Baoheng Zhang, Wilton Fok, Xiaojuan Qi, Yik-Chung Wu

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on two large-scale benchmarks UCF-Crime and XD-Violence manifest that our method outperforms state-of-the-art approaches.
Researcher Affiliation | Academia | ¹Department of Electrical and Electronic Engineering, The University of Hong Kong; ²The Chinese University of Hong Kong. {carolcyx, cheungbh}@hku.hk, {wtfok, xjqi, ycwu}@eee.hku.hk, zzliu@cse.cuhk.edu.hk
Pseudocode | No | The paper describes the approach using text, figures, and equations, but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The codes are available in https://github.com/carolchenyx/MGFN.git.
Open Datasets | Yes | We consider two benchmarks in our analysis, UCF-Crime (Sultani, Chen, and Shah 2018) and XD-Violence (Wu et al. 2020).
Dataset Splits | No | The paper does not explicitly state specific percentages or methods for training, validation, and test splits, nor does it cite a source that defines these splits for the datasets used.
Hardware Specification | No | The paper states 'Our proposed method is implemented in PyTorch' but does not specify any hardware details such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper states 'Our proposed method is implemented in PyTorch (Paszke et al. 2019)' but does not provide a specific version number for PyTorch or other libraries.
Experiment Setup | Yes | The hyperparameters are set as 𝑇 = 32, 𝑃 = 10, 𝛼 = 0.1, 𝑘 = 3, 𝜆1 = 𝜆2 = 1, 𝜆3 = 0.001. To train the network, we used Adam optimiser (Kingma and Ba 2015) with a weight decay of 0.0005 and a learning rate of 0.001. The batch size 𝐵 in the training is 16.
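For concreteness, below is a minimal PyTorch sketch of the training configuration reported in the Experiment Setup row (Adam optimiser, learning rate 0.001, weight decay 0.0005, batch size 16, 𝑇 = 32 snippets per video). The model, feature dimensionality, random data, and loss are hypothetical placeholders for illustration only; MGFN's actual architecture and magnitude-contrastive loss are in the linked repository and are not reproduced here.

```python
# Sketch of the reported training configuration, not the MGFN method itself.
# Only lr=0.001, weight_decay=0.0005, batch_size=16, and T=32 come from the
# paper; the model, feature size, data, and loss below are placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

T = 32              # snippets per video, as reported in the paper
FEATURE_DIM = 2048  # assumed per-snippet feature size (placeholder value)

# Placeholder model and data standing in for MGFN and the benchmark features.
model = nn.Sequential(nn.Flatten(), nn.Linear(T * FEATURE_DIM, 1))
features = torch.randn(64, T, FEATURE_DIM)          # fake snippet features
labels = torch.randint(0, 2, (64, 1)).float()       # fake video-level labels
loader = DataLoader(TensorDataset(features, labels),
                    batch_size=16, shuffle=True)    # B = 16 as reported

# Adam optimiser with the reported learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)
criterion = nn.BCEWithLogitsLoss()  # stand-in for the paper's combined loss

for batch_features, batch_labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(batch_features), batch_labels)
    loss.backward()
    optimizer.step()
```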