Learning Event-Relevant Factors for Video Anomaly Detection
Authors: Che Sun, Chenrui Shi, Yunde Jia, Yuwei Wu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The extensive experiments show the effectiveness of our method for video anomaly detection. |
| Researcher Affiliation | Academia | (1) Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology, China; (2) Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China |
| Pseudocode | No | The paper does not include pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We conduct experiments on three common benchmark datasets, including Shanghai Tech (Luo, Liu, and Gao 2017), CUHK Avenue (Lu, Shi, and Jia 2013), and UCSD Ped2 (Mahadevan et al. 2009). |
| Dataset Splits | No | The paper specifies training and testing scenarios: 'Setting-A (big data scenarios): ...all training samples are used for training.'; 'Setting-B (small data scenarios): ...We randomly select 10% of the training samples to form sub-datasets...'; models are 'tested on the whole test dataset'. It does not explicitly mention a separate validation split or dataset (a sketch of the Setting-B subsampling appears after the table). |
| Hardware Specification | Yes | The experiment results are obtained on a single NVIDIA RTX3090 GPU and an Intel i9-10900X CPU, and we do not consider the pre-processing time of the object detection and optical flow estimation. |
| Software Dependencies | No | We use PyTorch (Paszke et al. 2017) to train our model and adopt the Adam optimizer (Kingma and Ba 2015) with β1 = 0.9 and β2 = 0.999 to optimize it. While PyTorch is mentioned, a specific version number is not provided. An optimizer sketch matching these settings follows the table. |
| Experiment Setup | Yes | The batch size, epoch number, and initial learning rate are set to (128, 80, 1e-4) and (128, 40, 8e-5) for training the causal generative model and finetuning the predictor, respectively. The learning rate is decayed by 0.8 after every 40 epochs. The margin parameter ϵ is set to 1 on the Shanghai Tech dataset and to 0.5 on the CUHK Avenue and UCSD Ped2 datasets. The trade-off parameters λ_ce and λ_dis are set to 0.001 and 0.5, respectively (see the configuration sketches below the table). |
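
A rough sketch of the Setting-B subsampling quoted in the Dataset Splits row. The paper releases no code, so the dataset object here is a stand-in (a random tensor), and the seed and shapes are assumptions; only the 10% selection and the batch size of 128 come from the paper.

```python
import torch
from torch.utils.data import TensorDataset, Subset, DataLoader

# Stand-in for a full training split; the actual datasets
# (Shanghai Tech, CUHK Avenue, UCSD Ped2) are video clips, not random tensors.
full_train = TensorDataset(torch.randn(1000, 3, 64, 64))

# Setting-B: "randomly select 10% of the training samples to form sub-datasets".
g = torch.Generator().manual_seed(0)                  # fixed seed so the subset is repeatable
perm = torch.randperm(len(full_train), generator=g)
sub_train = Subset(full_train, perm[: len(full_train) // 10].tolist())

# Batch size 128, matching the Experiment Setup row.
loader = DataLoader(sub_train, batch_size=128, shuffle=True)
```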
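
The Software Dependencies and Experiment Setup rows together pin down the optimizer: Adam with β1 = 0.9 and β2 = 0.999, an initial learning rate of 1e-4 (8e-5 when finetuning the predictor), decayed by 0.8 every 40 epochs. A minimal sketch of that schedule in PyTorch, with a placeholder model since the authors' architecture is not public:

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 128)  # placeholder; the paper's causal generative model is not released

# Adam with the betas reported in the paper; lr = 1e-4 for the generative-model
# stage (the finetuning stage would use lr = 8e-5 and 40 epochs instead).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))

# "The learning rate is decayed by 0.8 after every 40 epochs" maps onto
# StepLR with step_size=40 and gamma=0.8, stepped once per epoch.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.8)

for epoch in range(80):  # 80 epochs for training the causal generative model
    # ... one pass over the training DataLoader would go here ...
    scheduler.step()
```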
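
The margin parameter ϵ and the trade-off weights λ_ce and λ_dis suggest a weighted multi-term objective. The paper's actual loss terms are not reproduced here; the sketch below only shows how such weights and a hinge-style margin would typically combine, and the individual loss values and the margin formulation itself are assumptions:

```python
import torch

# Hypothetical per-batch loss terms; the real definitions are in the paper.
recon_loss = torch.tensor(0.42)   # main reconstruction / prediction term
ce_loss    = torch.tensor(1.30)   # auxiliary term weighted by λ_ce
dis_loss   = torch.tensor(0.75)   # auxiliary term weighted by λ_dis
score_gap  = torch.tensor(0.60)   # quantity the margin pushes apart

eps, lambda_ce, lambda_dis = 1.0, 0.001, 0.5  # Shanghai Tech values from the table

# Hinge-style margin term (an assumption about the form of the margin loss).
margin_loss = torch.clamp(eps - score_gap, min=0.0)

total = recon_loss + lambda_ce * ce_loss + lambda_dis * dis_loss + margin_loss
```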