Keypoint Message Passing for Video-Based Person Re-identification
Authors: Di Chen, Andreas Doering, Shanshan Zhang, Jian Yang, Juergen Gall, Bernt Schiele
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our model significantly improves the baseline, achieving results on par with or better than the current state-of-the-art models. |
| Researcher Affiliation | Academia | 1 PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, Nanjing University of Science and Technology 2 University of Bonn 3 Max Planck Institute for Informatics |
| Pseudocode | No | The paper describes the method and architecture through text and diagrams, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The new dataset and code will be released at https://github.com/DeanChan/KeypointMessagePassing. |
| Open Datasets | Yes | MARS (Zheng et al. 2016a) is a large-scale benchmark dataset for video-based person re-ID. PoseTrackReID is a new dataset proposed in this work to facilitate more comprehensive experiments for video-based person re-ID; it is a cropped subset of the PoseTrack 2018 dataset (Andriluka et al. 2018). |
| Dataset Splits | No | For MARS: 'The training set consists of 8,298 tracklets of 625 identities, while the testing set includes 1,980 tracklets of 626 identities for query and 6,082 tracklets of 620 identities for gallery.' For PoseTrackReID: 'The training set of PoseTrackReID is gathered from the training set of PoseTrack 2018, including 7,725 tracklets of 5,350 identities. The query set consists of 847 tracklets of 830 identities, while the gallery set includes 1,965 tracklets of 1,696 identities.' No explicit separate validation set is described with specific split information for their experiments. |
| Hardware Specification | No | No specific hardware details such as GPU models, CPU types, or memory amounts used for running experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as programming language versions or library names with their version numbers. |
| Experiment Setup | Yes | The dimension for the latent node features of GCN is set to 64. During training, both of the branches are supervised with cross-entropy losses. We choose ResNet-50 (He et al. 2016) as the base CNN for the visual branch. We adopt the 28-layer GCN model in (Li et al. 2020) and remove the first graph convolution layer to match the visual branch. |
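To make the setup row above concrete, here is a minimal NumPy sketch of a single graph-convolution step of the kind a keypoint branch would apply, using the 64-dimensional latent node features stated in the paper. The skeleton adjacency, the weight matrix, and the function name `gcn_layer` are toy placeholders for illustration, not the authors' implementation (which adopts the 28-layer GCN of Li et al. 2020).

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 · H · W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalisation
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation

# Toy example: 15 body keypoints, 64-dim latent features (dim from the paper)
rng = np.random.default_rng(0)
H = rng.standard_normal((15, 64))           # per-keypoint node features
A = np.zeros((15, 15))
A[0, 1] = A[1, 0] = 1.0                     # one toy skeleton edge
W = rng.standard_normal((64, 64)) * 0.1     # layer weights (placeholder)

H_out = gcn_layer(H, A, W)                  # propagated node features, (15, 64)
```

Stacking such layers passes appearance information along the skeleton graph, which is the "message passing" the paper builds its keypoint branch around.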