Region-Based Quality Estimation Network for Large-Scale Person Re-Identification

Authors: Guanglu Song, Biao Leng, Yu Liu, Congrui Hetang, Shaofan Cai

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate the performance of RQEN on three publicly available video datasets and the proposed large-scale dataset for video-based person re-id: the PRID 2011 dataset (Hirzer et al. 2011), the iLIDS-VID dataset (Wang et al. 2014), MARS (Zheng et al. 2016), and LPW.
Researcher Affiliation | Academia | Guanglu Song (1), Biao Leng (1), Yu Liu (2), Congrui Hetang (1), Shaofan Cai (1); (1) School of Computer Science and Engineering, Beihang University, Beijing 100191, China; (2) The Chinese University of Hong Kong, Hong Kong
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link to a dataset (http://liuyu.us/dataset/lpw/index.html) but does not provide concrete access to the source code for the methodology described in the paper.
Open Datasets | Yes | We propose a new dataset named Labeled Pedestrian in the Wild (LPW). It contains 2,731 pedestrians in three different scenes, where each annotated identity is captured by 2 to 4 cameras. The LPW features a notable scale of 7,694 tracklets with over 590,000 images, as well as the cleanliness of its tracklets. It is distinguished from existing datasets in three aspects: large scale with cleanliness, automatically detected bounding boxes, and far more crowded scenes with a greater age span. This dataset provides a more realistic and challenging benchmark, which facilitates the further exploration of more powerful algorithms. It is available at http://liuyu.us/dataset/lpw/index.html.
Dataset Splits | No | The paper describes training and test splits for the datasets, but does not specify a separate validation set with explicit percentages or counts for hyperparameter tuning. For example, it states: 'In the LPW, the second scene and the third scene with a total of 1,975 people are used for training, and the first scene is tested with 756 people.' No dedicated validation split is specified (see the dataset-split sketch after this table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper introduces τ as the margin of the triplet loss but does not give its value, nor other hyperparameters such as the learning rate, batch size, or optimizer settings. It states: '$L_t = [d(F_w(S_i^o), F_w(S_i^+)) - d(F_w(S_i^o), F_w(S_i^-)) + \tau]_+$ (3), where $d(\cdot)$ is the L2-norm distance, $[\cdot]_+$ denotes $\max(\cdot, 0)$, and $\tau$ is the margin of the triplet loss.' (A hedged implementation sketch follows this table.)
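For concreteness, the sequence-level triplet loss of Eq. (3) can be sketched as below. This is a minimal PyTorch sketch under stated assumptions: the paper does not report the margin τ (0.3 here is a placeholder), the batch reduction, or the framework used, and the `triplet_loss` name and its arguments are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Sequence-level triplet loss from Eq. (3):
    L_t = [ d(F_w(S_i^o), F_w(S_i^+)) - d(F_w(S_i^o), F_w(S_i^-)) + tau ]_+
    Inputs are aggregated set-level features F_w(S); margin=0.3 is an
    assumed value, since the paper does not report tau.
    """
    d_pos = F.pairwise_distance(anchor, positive, p=2)  # L2 distance to the positive set
    d_neg = F.pairwise_distance(anchor, negative, p=2)  # L2 distance to the negative set
    # [.]_+ = max(., 0); averaging over the batch is an assumed reduction
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Toy usage: a batch of 8 sequence embeddings of dimension 128.
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```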
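The LPW evaluation protocol quoted in the Dataset Splits row (scenes 2 and 3 for training, scene 1 for testing) could be realized as in the sketch below. The on-disk layout `LPW/scene<k>/<person_id>/` is an assumption for illustration; the released dataset's actual format may differ.

```python
from pathlib import Path

# Assumed layout: LPW/scene1, LPW/scene2, LPW/scene3, each containing one
# subdirectory per annotated identity.
root = Path("LPW")
train_ids = [d for s in ("scene2", "scene3")
             for d in sorted((root / s).iterdir()) if d.is_dir()]
test_ids = [d for d in sorted((root / "scene1").iterdir()) if d.is_dir()]

# Per the paper: 1,975 training identities and 756 test identities.
print(len(train_ids), "training identities,", len(test_ids), "test identities")
```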