Discriminative Dictionary Learning With Ranking Metric Embedded for Person Re-Identification
Authors: De Cheng, Xiaojun Chang, Li Liu, Alexander G. Hauptmann, Yihong Gong, Nanning Zheng
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on three widely used person Re-Id benchmark datasets, and achieve state-of-the-art performance. |
| Researcher Affiliation | Collaboration | De Cheng (1,2), Xiaojun Chang (2), Li Liu (3), Alexander G. Hauptmann (2), Yihong Gong (1), Nanning Zheng (1). (1) Xi'an Jiaotong University, China; (2) Carnegie Mellon University, USA; (3) Malong Technologies Co. Ltd. |
| Pseudocode | Yes | Algorithm 1: The Ranking Metric Embedded Discriminative Dictionary Learning Method |
| Open Source Code | No | The paper does not provide any links or explicit statements about the availability of its source code. |
| Open Datasets | Yes | In this section, we use three widely used person Re-Id benchmark datasets, namely VIPeR, 3DPeS and CUHK03, for performance evaluations. All the datasets contain a set of persons, each of whom has several images captured by different cameras. |
| Dataset Splits | Yes | The dataset is separated into a training and a test set, where images of the same person can only appear in one of the two. The test set is further divided into a probe and a gallery set, which contain different images of the same person. In the VIPeR and 3DPeS datasets, half of the identities are used for training and the other half for testing, while in the CUHK03 dataset, 100 pedestrians are used as the test set and the rest are used as the training set. |
| Hardware Specification | No | The paper does not explicitly state any specific hardware specifications (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions features like 'ResNet152' and 'handcrafted features' but does not specify any software libraries with version numbers (e.g., TensorFlow 2.x, PyTorch 1.x) that were used for implementation. |
| Experiment Setup | Yes | Parameter Setting: We empirically set the dictionary size for D in Eq. (5) as K = 200. The parameters τ, α, γ and β are set to 1.0, 0.25, 0.1 and 0.7, respectively. The learning rate starts with η = 0.01, then at each iteration, we increase η by a factor of 1.2 if the loss function decreased and decrease η by a factor of 0.8 if the loss increased. |
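The identity-disjoint split protocol described in the Dataset Splits row can be sketched as follows. This is a hypothetical illustration, not the paper's actual code; the function name `identity_disjoint_split`, the dict-based data layout, and the "first image to probe, rest to gallery" convention are all assumptions.

```python
import random

def identity_disjoint_split(images_by_id, n_test_ids, seed=0):
    """Split identities so images of one person never appear in both sets.

    images_by_id: dict mapping a person id to a list of image paths.
    Returns (train, probe, gallery) dicts keyed by person id.
    """
    ids = sorted(images_by_id)
    rng = random.Random(seed)
    rng.shuffle(ids)
    test_ids, train_ids = ids[:n_test_ids], ids[n_test_ids:]
    train = {i: images_by_id[i] for i in train_ids}
    # Within the test set, place the first image of each identity in the
    # probe set and the remaining images in the gallery (one possible
    # convention; the paper does not specify how images are assigned).
    probe = {i: images_by_id[i][:1] for i in test_ids}
    gallery = {i: images_by_id[i][1:] for i in test_ids}
    return train, probe, gallery
```

Under this sketch, the CUHK03 protocol would correspond to `n_test_ids=100`, while for VIPeR and 3DPeS `n_test_ids` would be half the number of identities.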
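The adaptive learning-rate rule quoted in the Experiment Setup row (multiply η by 1.2 when the loss decreases, by 0.8 when it increases) can be sketched as below. The quadratic toy loss and gradient-descent loop are illustrative assumptions for demonstration only; they are not the paper's dictionary-learning objective.

```python
def adapt_learning_rate(eta, prev_loss, curr_loss, up=1.2, down=0.8):
    """Scale the learning rate by `up` if the loss decreased, else by `down`."""
    return eta * up if curr_loss < prev_loss else eta * down

# Toy demonstration: minimise f(w) = (w - 3)^2 by gradient descent,
# starting from the paper's initial rate eta = 0.01.
w, eta = 0.0, 0.01
prev_loss = (w - 3) ** 2
for _ in range(100):
    grad = 2 * (w - 3)          # gradient of the toy loss
    w -= eta * grad
    curr_loss = (w - 3) ** 2
    eta = adapt_learning_rate(eta, prev_loss, curr_loss)
    prev_loss = curr_loss
```

This kind of multiplicative schedule lets η grow while optimisation is making progress and backs off after an overshoot, without requiring a hand-tuned decay curve.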