Crowd-Level Abnormal Behavior Detection via Multi-Scale Motion Consistency Learning

Authors: Linbo Luo, Yuanjing Li, Haiyan Yin, Shangwei Xie, Ruimin Hu, Wentong Cai

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states: "For the empirical study, we consider three large-scale crowd event datasets, UMN, Hajj and Love Parade. Experimental results show that MSMC-Net could substantially improve the state-of-the-art performance on all the datasets." It further reports: "We present an extensive empirical evaluation study, where we implement five related baselines, adopt three datasets and demonstrate that our method leads to superior performance consistently across all the datasets. In our evaluation, ten independent training and testing runs for each method are performed. The average results of different VAD methods in terms of AUC and EER are shown in Table 1." An ablation study examines the effectiveness of the motion consistency representation and of multi-scale learning.
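The response above quotes the paper's two evaluation metrics, AUC and EER, averaged over ten runs in Table 1. As a minimal, self-contained sketch (not code from the paper), this is how frame-level AUC and EER are typically computed from per-frame anomaly scores in VAD evaluation, with 1 marking an abnormal frame and 0 a normal one:

```python
def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic (ties get average rank)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # Find the run of tied scores and assign the average 1-based rank.
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def eer(scores, labels):
    """Equal error rate: sweep thresholds, take the point where FPR ~= FNR."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(scores)):
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fpr, fnr = fp / n_neg, fn / n_pos
        if abs(fpr - fnr) < best_gap:
            best_gap, best_eer = abs(fpr - fnr), (fpr + fnr) / 2
    return best_eer
```

A perfect detector that scores every abnormal frame above every normal frame yields AUC = 1.0 and EER = 0.0; the paper's Table 1 reports these metrics averaged over ten independent training/testing runs per method.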
Researcher Affiliation | Collaboration | 1) School of Cyber Engineering, Xidian University; 2) Sea AI Lab; 3) School of Computer Science and Engineering, Nanyang Technological University
Pseudocode | Yes | The paper includes Algorithm 1: "Reconstruction procedure of our MSMC-Net".
Open Source Code | No | The paper does not explicitly state that source code for its methodology is released, nor does it provide a direct link to a code repository. The only external link is to an arXiv appendix, which typically contains additional paper content, not code.
Open Datasets | Yes | The paper states: "We evaluate the performance of our method on three publicly available datasets, UMN, Hajj and Love Parade, which contain crowd-level abnormal behaviors including crowd escaping, counter flow and crowd turbulence. Note that our work is the first to introduce Hajj and Love Parade for VAD study. More details on these datasets are described in Appendix."
Dataset Splits | No | The paper mentions training and testing but does not state the specific dataset splits (e.g., percentages or counts) for training, validation, and testing. It notes only that "In our evaluation, ten independent training and testing runs for each method are performed," with no detailed split information.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments.
Experiment Setup | No | The main text does not contain concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or training configurations. It states "Details on these baseline methods and our settings are in Appendix," implying some settings appear there rather than in the main text.