Quantifying and Detecting Collective Motion by Manifold Learning
Authors: Qi Wang, Mulin Chen, Xuelong Li
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on real-world data sets show that our method is capable of handling crowd scenes with complicated structures and various dynamics, and demonstrate its superior performance against state-of-the-art competitors. In this section, we conduct extensive experiments to evaluate the effectiveness of the proposed method on two aspects: collectiveness measurement and collective motion detection. |
| Researcher Affiliation | Academia | Qi Wang,1 Mulin Chen,1 Xuelong Li2 1School of Computer Science and Center for OPTical IMagery Analysis and Learning (OPTIMAL), Northwestern Polytechnical University, Xi'an 710072, Shaanxi, P. R. China 2Center for OPTical IMagery Analysis and Learning (OPTIMAL), Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, Shaanxi, P. R. China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Data Set. Collective Motion Database (Zhou et al. 2014) contains 413 crowd video clips (100 frames per clip) captured from 62 different scenes with various densities and structures. Each video clip is labeled with a ground-truth score, which indicates the degree of behavior consistency in the crowd scene, and the clips are sorted into high, medium, and low collectiveness according to their scores. CUHK Crowd Dataset (Shao, Loy, and Wang 2014) provides 474 crowd videos for group detection, which are captured from real-world crowd scenes with a variety of crowd densities. |
| Dataset Splits | Yes | In the training stage, 100 video clips of the dataset are selected randomly, and 30 frames in each selected clip are used to train the parameters. All the remaining frames are used as the testing set in the following section. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions algorithms (e.g., EM algorithm, Kalman smoother, gKLT tracker) and general software concepts but does not list specific software names with version numbers needed for replication. |
| Experiment Setup | Yes | For the hidden state-based model, μ is set as [0 0 0]T and the state transition matrix A is initialized by the suboptimal learning method (Chan and Vasconcelos 2008). The covariances Q, R, S are initialized as [1 0 0; 0 1 0; 0 0 0], [0.1 0 0; 0 0.1 0; 0 0 0], and [1 0 0; 0 1 0; 0 0 1]. From Fig. 4(A), it can be seen that the proposed method achieves relatively better performance when k is 20. Thus, k = 20 is the best choice. As shown in Fig. 4(B), we finally choose α = 0.8 in this work. |
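The initialization described in the Experiment Setup row can be sketched as below. This is a minimal illustration only: the function and variable names are assumptions, not from the paper, and the state transition matrix A (initialized in the paper via the suboptimal learning method of Chan and Vasconcelos 2008) is omitted.

```python
import numpy as np

def init_hidden_state_params():
    """Sketch of the hidden state-based model initialization reported
    in the paper; names are illustrative assumptions."""
    mu = np.zeros(3)              # initial state mean, [0 0 0]^T
    Q = np.diag([1.0, 1.0, 0.0])  # covariance [1 0 0; 0 1 0; 0 0 0]
    R = np.diag([0.1, 0.1, 0.0])  # covariance [0.1 0 0; 0 0.1 0; 0 0 0]
    S = np.eye(3)                 # covariance [1 0 0; 0 1 0; 0 0 1]
    k = 20                        # neighborhood size chosen from Fig. 4(A)
    alpha = 0.8                   # weight chosen from Fig. 4(B)
    return mu, Q, R, S, k, alpha
```

Writing the covariances with `np.diag` keeps the sketch term-by-term consistent with the MATLAB-style matrices quoted in the table.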