Appearance-Motion Memory Consistency Network for Video Anomaly Detection
Authors: Ruichu Cai, Hao Zhang, Wen Liu, Shenghua Gao, Zhifeng Hao
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Solid experimental results on various standard datasets validate the effectiveness of our approach. |
| Researcher Affiliation | Academia | 1Guangdong University of Technology, Guangzhou 510006, China. 2Shanghai Tech University, Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai 201210, China. 3Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, China. {cairuichu,haodotzhang}@gmail.com, {liuwen,gaoshh}@shanghaitech.edu.cn, zfhao@gdut.edu.cn |
| Pseudocode | Yes | Algorithm 1 The whole pipeline of our AMMC-Net. |
| Open Source Code | Yes | all codes1 have been released for further research convenience to the community. 1https://github.com/NjuHaoZhang/AMMCNetAAAI2021 |
| Open Datasets | Yes | We conduct the experiments on three challenging video anomaly detection datasets, including UCSD Pedestrian (Ped1 and Ped2) dataset (Li, Mahadevan, and Vasconcelos 2013), CUHK Avenue (Lu, Shi, and Jia 2013) and Shanghai Tech dataset (Luo, Liu, and Gao 2017b). |
| Dataset Splits | No | The paper mentions 'training' and 'testing' phases but does not explicitly state the use of a 'validation' set with specific details or percentages. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instances) used for running its experiments. |
| Software Dependencies | No | The paper mentions the use of a 'pre-trained flownet (Reda et al. 2017)' but does not provide specific version numbers for any software components, libraries, or dependencies. |
| Experiment Setup | No | The paper describes the training process only in general terms (e.g., a 'two-stage optimization method consisting of pre-training and joint training') and lacks concrete hyperparameter values or system-level training settings, stating that 'More details can be found in the supplementary materials.' |