Robust Anomaly Detection in Videos Using Multilevel Representations

Authors: Hung Vu, Tu Dinh Nguyen, Trung Le, Wei Luo, Dinh Phung (pp. 5216-5223)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our proposed multilevel detector shows a significant improvement in pixel-level Equal Error Rate, namely 11.35%, 12.32% and 4.31% improvement in UCSD Ped 1, UCSD Ped 2 and Avenue datasets respectively. In addition, the model allowed us to detect mislabeled anomalies in the UCSD Ped 1. ... Thorough experiments and analysis show that our multilevel detectors significantly outperform other state-of-the-art anomaly detectors (11.35%, 12.32% and 4.31% improvement in pixel-level Equal Error Rate) in three benchmarks of UCSD Ped 1, UCSD Ped 2 and Avenue datasets.
Researcher Affiliation | Academia | Center for Pattern Recognition and Data Analytics, Deakin University, Geelong, Australia ({hungv, wei.luo}@deakin.edu.au); Monash University, Clayton, VIC 3800, Australia ({Tu.Dinh.Nguyen, trunglm, Dinh.Phung}@monash.edu)
Pseudocode | Yes | Algorithm 1: Combining multilevel detection maps
Open Source Code | No | The paper provides a link, 'Our new ground-truth is available online' (https://github.com/SeaOtter/vad_gan), but this hosts the dataset (ground truth), not the source code for the described methodology.
Open Datasets | Yes | We compare our system with the state-of-the-art methods on three datasets of UCSD Ped 1 (Li, Mahadevan, and Vasconcelos 2014), UCSD Ped 2 (Li, Mahadevan, and Vasconcelos 2014) and Avenue (Lu, Shi, and Jia 2013). ... Our new ground-truth is available online (https://github.com/SeaOtter/vad_gan).
Dataset Splits | No | The paper states, 'Each dataset consists of two sets of training videos and testing videos', but it does not provide information about a separate validation set or its split.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'Adagrad optimizer (Duchi, Hazan, and Singer 2011)' and 'network architecture and the setting in (Isola et al. 2017)' but does not specify version numbers for any software dependencies or libraries.
Experiment Setup | Yes | All networks are trained using Adagrad optimizer (Duchi, Hazan, and Singer 2011), γ = 1, a learning rate of 0.1 and 500 epochs. ... We follow the network architecture and the setting in (Isola et al. 2017) to train these models using a learning rate of 0.0002, λ = 100 and the batch size of 1. ... We set the thresholds β = 0.8 and ρ = 0.75 in all experiments.
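The headline numbers above are improvements in pixel-level Equal Error Rate (EER), the operating point where the false positive rate equals the false negative rate. As a reference for readers unfamiliar with the metric, here is a minimal threshold-sweep sketch in NumPy; `anomaly_eer` is a hypothetical helper, not the authors' evaluation code.

```python
import numpy as np

def anomaly_eer(scores, labels):
    """Approximate the Equal Error Rate: sweep candidate thresholds and
    return the point where false positive and false negative rates meet."""
    best_gap, eer = np.inf, 1.0
    for t in np.unique(scores):
        pred = scores >= t
        fpr = np.mean(pred[labels == 0]) if np.any(labels == 0) else 0.0
        fnr = np.mean(~pred[labels == 1]) if np.any(labels == 1) else 0.0
        if abs(fpr - fnr) < best_gap:
            best_gap, eer = abs(fpr - fnr), (fpr + fnr) / 2
    return eer

# Toy example: perfectly separated scores give an EER of 0.
scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
labels = np.array([1, 1, 1, 0, 0])
print(anomaly_eer(scores, labels))  # 0.0
```

Lower EER is better, so the reported 11.35%, 12.32% and 4.31% figures are absolute reductions relative to prior detectors.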
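Algorithm 1 in the paper fuses per-level detection maps into a single anomaly map. The paper's exact pseudocode is not reproduced in this report, so the sketch below is an assumption: a simple agreement rule that flags a pixel when at least a fraction β of the level-wise binary maps mark it (β = 0.8, matching the reported threshold).

```python
import numpy as np

def combine_detection_maps(level_maps, beta=0.8):
    """Sketch of a multilevel fusion rule (an assumption, not the paper's
    exact Algorithm 1): flag a pixel when the fraction of representation
    levels marking it anomalous reaches beta."""
    stacked = np.stack(level_maps).astype(float)  # shape (levels, H, W)
    agreement = stacked.mean(axis=0)              # per-pixel vote ratio
    return (agreement >= beta).astype(np.uint8)

# Three levels agree only on the top-left pixel.
maps = [np.array([[1, 0], [0, 1]]),
        np.array([[1, 1], [0, 0]]),
        np.array([[1, 0], [1, 0]])]
print(combine_detection_maps(maps))  # [[1 0]
                                     #  [0 0]]
```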
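The reported hyperparameters (Adagrad with lr 0.1, γ = 1, 500 epochs; pix2pix-style GAN training with lr 0.0002, λ = 100, batch size 1; thresholds β = 0.8, ρ = 0.75) can be gathered in one place, and the Adagrad update itself is standard. The dict and function names below are illustrative, not taken from the authors' code.

```python
import numpy as np

# Hyperparameters quoted in the paper, collected for reference
# (key names are illustrative, not the authors' code).
CONFIG = {
    "detector": {"optimizer": "Adagrad", "lr": 0.1, "gamma": 1.0, "epochs": 500},
    "gan": {"lr": 2e-4, "lambda": 100, "batch_size": 1},  # Isola et al. 2017 setting
    "thresholds": {"beta": 0.8, "rho": 0.75},
}

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One standard Adagrad update: scale the step by the root of the
    accumulated squared gradients (Duchi, Hazan, and Singer 2011)."""
    accum = accum + grad ** 2
    w = w - lr * grad / (np.sqrt(accum) + eps)
    return w, accum

w, accum = np.zeros(2), np.zeros(2)
w, accum = adagrad_step(w, np.array([1.0, -2.0]), accum,
                        lr=CONFIG["detector"]["lr"])
print(w)  # each weight moves ~lr against its gradient on the first step
```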