Online Detection of Abnormal Events Using Incremental Coding Length

Authors: Jayanta K. Dutta, Bonny Banerjee

AAAI 2015

Reproducibility assessment (variable: result, followed by the LLM response):

Research Type: Experimental
  "Experiments on three benchmark datasets and evaluations in comparison with a number of mainstream algorithms show that the approach is comparable to the state-of-the-art."

Researcher Affiliation: Academia
  "Jayanta K. Dutta and Bonny Banerjee, Institute for Intelligent Systems, and Department of Electrical & Computer Engineering, The University of Memphis, Memphis, TN 38152, USA. {jkdutta, bbnerjee}@memphis.edu"

Pseudocode: Yes
  Algorithm 1: Online Dictionary Update.
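The paper's Algorithm 1 is given only as pseudocode, so as a point of reference, one generic online dictionary-update step (sparse-code the input, take a gradient step on the reconstruction error, renormalize the atoms) can be sketched as follows. The learning rate `lr` and the exact update rule are assumptions for illustration, not the paper's Algorithm 1:

```python
import numpy as np

def online_dictionary_step(D, x, a, lr=0.1):
    """One illustrative online update of dictionary D (atoms as
    columns) given input x and its sparse code a: gradient step on
    0.5 * ||x - D a||^2, then renormalize each atom to unit norm.
    NOTE: sketch only; lr and the update rule are assumptions."""
    residual = x - D @ a
    # Gradient of 0.5*||x - D a||^2 w.r.t. D is -residual @ a^T,
    # so a descent step adds lr * outer(residual, a).
    D = D + lr * np.outer(residual, a)
    # Keep atoms on the unit sphere so sparse codes stay comparable.
    norms = np.linalg.norm(D, axis=0)
    D = D / np.maximum(norms, 1e-12)
    return D
```

Renormalizing after each step is a common convention in online dictionary learning; it prevents atoms from growing without bound and absorbing the code magnitudes.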
Open Source Code: No
  The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.

Open Datasets: Yes
  The UCSD dataset (Mahadevan et al. 2010), the UMN dataset (Mehran, Oyama, and Shah 2009), and the Subway dataset (Adam et al. 2008) are used; all are commonly used benchmark datasets with proper citations.

Dataset Splits: Yes
  UCSD dataset: Ped1 contains 34 training and 36 testing video clips... Ped2 contains 16 training and 12 testing video clips... UMN dataset: ...the first 400 frames of each scene were used to learn the dictionary... The other frames were used for testing. Subway dataset: ...The first ten minutes of each video were used to learn the dictionary...

Hardware Specification: No
  The paper does not specify any hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.

Software Dependencies: No
  The paper mentions techniques such as Orthogonal Matching Pursuit (OMP) and Batch-OMP but does not provide version numbers for any software dependencies or libraries used in the implementation.
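For context on the OMP technique the paper relies on, here is a minimal textbook-style sketch of Orthogonal Matching Pursuit in numpy: greedily select the dictionary atom most correlated with the residual, then re-fit on the selected atoms by least squares. This is a generic sketch, not the paper's (unspecified) implementation or the Batch-OMP variant:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: approximate x as a k-sparse
    combination of the columns (atoms) of D, assumed unit-norm.
    Generic sketch; not the paper's implementation."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        # Greedy step: atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Orthogonal step: least-squares re-fit on the selected atoms.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs
```

Batch-OMP (Rubinstein et al.) accelerates this loop when many signals are coded against one fixed dictionary by precomputing the Gram matrix D.T @ D; the greedy/orthogonal logic is the same.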
Experiment Setup: No
  The paper mentions the input cuboid size (13x13x10 pixels) and that frames were resized to 320x240 pixels, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs, optimizer settings) or other detailed system-level training configurations.
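To make the stated input size concrete, the following sketch splits a (frames, height, width) video volume into 13x13x10 spatiotemporal cuboids. The paper only states the cuboid size and frame resolution; the non-overlapping stride here is an assumption for illustration:

```python
import numpy as np

def extract_cuboids(video, ph=13, pw=13, pt=10):
    """Split a (frames, height, width) video volume into
    non-overlapping ph x pw x pt spatiotemporal cuboids, each
    flattened to a vector. The non-overlapping stride is an
    assumption; the paper specifies only the cuboid size."""
    T, H, W = video.shape
    cuboids = []
    for t in range(0, T - pt + 1, pt):
        for i in range(0, H - ph + 1, ph):
            for j in range(0, W - pw + 1, pw):
                cuboids.append(video[t:t + pt, i:i + ph, j:j + pw].ravel())
    return np.asarray(cuboids)
```

Each flattened cuboid has 13 * 13 * 10 = 1690 entries, which would be the input dimensionality for the sparse-coding stage under this reading.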