Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Video-Based Person Re-Identification by Simultaneously Learning Intra-Video and Inter-Video Distance Metrics

Authors: Xiaoke Zhu, Xiao-Yuan Jing, Fei Wu, Hui Feng

IJCAI 2016

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on the public iLIDS-VID and PRID 2011 image sequence datasets show that our approach achieves the state-of-the-art performance.
Researcher Affiliation Academia Xiaoke Zhu (1,3), Xiao-Yuan Jing (1,2), Fei Wu (2,1), Hui Feng (1). (1) State Key Laboratory of Software Engineering, School of Computer, Wuhan University, China; (2) College of Automation, Nanjing University of Posts and Telecommunications, China; (3) School of Computer and Information Engineering, Henan University, China
Pseudocode Yes Algorithm 1 Simultaneous intra-video and inter-video distance learning (SI2DL)
Open Source Code No The paper does not provide concrete access to source code for the methodology described.
Open Datasets Yes To evaluate the effectiveness of our approach, we conduct extensive experiments on two publicly available person sequence datasets, including iLIDS-VID [Wang et al., 2014] and PRID 2011 [Hirzer et al., 2011].
Dataset Splits Yes In experiments, we choose these parameters by 5-fold cross-validation on each dataset.
Hardware Specification No The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies No The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup Yes Parameter Settings. There are three parameters in our SI2DL model, i.e., µ, λ1 and λ2. In experiments, we choose these parameters by 5-fold cross-validation on each dataset. With respect to K1 and K2, we set them as (2200, 80) for iLIDS-VID, and (2500, 100) for PRID 2011, respectively. ... The parameters µ, λ1 and λ2 are set as 0.00005, 0.2 and 0.2, respectively. ... The parameters µ, λ1 and λ2 are set as 0.00005, 0.1 and 0.1, respectively.
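The parameter-selection procedure quoted above (picking µ, λ1 and λ2 by 5-fold cross-validation) can be sketched generically. This is an illustrative sketch only, not the authors' code: the function `fit_score` is a hypothetical placeholder standing in for "train SI2DL on the training folds and return re-identification accuracy on the held-out fold", and the parameter names in the grid merely mirror those reported in the paper.

```python
import numpy as np
from itertools import product

def five_fold_cv_select(X, y, param_grid, fit_score):
    """Return the parameter combination with the best mean 5-fold CV score.

    param_grid : dict mapping parameter name -> list of candidate values
    fit_score  : callable(train_idx, test_idx, params) -> float score
                 (placeholder for training the model and evaluating it)
    """
    n = len(y)
    folds = np.array_split(np.random.permutation(n), 5)  # 5 disjoint folds
    best_params, best_score = None, -np.inf
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        scores = []
        for k in range(5):
            test_idx = folds[k]
            train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
            scores.append(fit_score(train_idx, test_idx, params))
        mean_score = float(np.mean(scores))
        if mean_score > best_score:
            best_params, best_score = params, mean_score
    return best_params, best_score
```

Usage would pass a grid such as `{"mu": [5e-5], "lambda1": [0.1, 0.2], "lambda2": [0.1, 0.2]}`, matching the candidate values reported per dataset.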