Fast Vehicle Identification in Surveillance via Ranked Semantic Sampling Based Embedding

Authors: Feng Zheng, Xin Miao, Heng Huang

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results demonstrate that RSS outperforms the state-of-the-art approaches, and that the embedding learned on one dataset can be transferred to perform vehicle Re-ID on another dataset. |
| Researcher Affiliation | Academia | Feng Zheng (1), Xin Miao (2), Heng Huang (1). (1) Department of Electrical and Computer Engineering, University of Pittsburgh; (2) Department of Computer Science & Engineering, University of Texas at Arlington. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology it describes. |
| Open Datasets | Yes | CompCars [Yang et al., 2015] was originally collected for the tasks of fine-grained categorization and verification; VeRi [Liu et al., 2016b] was collected from real-world urban surveillance scenes. |
| Dataset Splits | Yes | CompCars: 2,000 images of each view are randomly selected for testing and the remaining samples are used for training. VeRi: 37,778 images from 576 vehicles are used for training, while the remaining 13,257 images from the other 200 vehicles are used for testing. |
| Hardware Specification | No | The paper does not specify the hardware used to run its experiments. |
| Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers. |
| Experiment Setup | Yes | The balance parameter in the RSS model is set to 0.1. The binary deep architecture shown in Fig. 2 consists of three components: a shared deep architecture based on GoogLeNet-style [Szegedy et al., 2015] Inception modules, a view-specific fully connected layer, and a binarization layer. The view-specific embedding layer consists of 640 units fully connected to the previous layer, divided into 5 groups of 128 units, each group corresponding to a view. In the learning phase, the first two components are updated using the objective in Eq. (7). Batch normalization is applied to each mini-batch. |
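The view-specific embedding described in the Experiment Setup row can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the backbone feature dimension (1024), the random weights, and the sign-based binarization rule are assumptions for the sketch; only the 640-unit layer split into 5 view groups of 128 units comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 1024            # assumed size of the shared backbone's output
NUM_VIEWS = 5              # 5 view groups, as in the paper
UNITS_PER_VIEW = 128       # 128 units per view, as in the paper
EMBED_DIM = NUM_VIEWS * UNITS_PER_VIEW  # 640 units in total

# View-specific fully connected layer: one weight matrix whose 640
# outputs are partitioned into per-view blocks of 128 units.
W = rng.standard_normal((FEAT_DIM, EMBED_DIM)) * 0.01
b = np.zeros(EMBED_DIM)

def embed(features: np.ndarray, view: int) -> np.ndarray:
    """Project backbone features and select the 128-unit block for `view`."""
    full = features @ W + b                        # (batch, 640)
    start = view * UNITS_PER_VIEW
    return full[:, start:start + UNITS_PER_VIEW]   # (batch, 128)

def binarize(embedding: np.ndarray) -> np.ndarray:
    """Binarization layer (assumed here to be a simple sign threshold)."""
    return (embedding > 0).astype(np.int8)

x = rng.standard_normal((4, FEAT_DIM))   # a batch of 4 backbone feature vectors
codes = binarize(embed(x, view=2))
print(codes.shape)  # (4, 128)
```

The per-view partition means each image is compared using only the 128-bit code of its own view group, which keeps matching cheap at surveillance scale.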