Robust Multi-Modality Person Re-identification

Authors: Aihua Zheng, Zi Wang, Zihan Chen, Chenglong Li, Jin Tang

AAAI 2021, pp. 3529-3537 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments on the RGBNT201 dataset, compared against state-of-the-art methods, demonstrate the contribution of multi-modality person Re-ID and the effectiveness of the proposed approach, which launches a new benchmark and a new baseline for multi-modality person Re-ID.
Researcher Affiliation | Academia | Aihua Zheng, Zi Wang, Zihan Chen, Chenglong Li, Jin Tang. Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China. {ahzheng214, ziwang1121, zhchen96, lcl1314}@foxmail.com, tangjin@ahu.edu.cn
Pseudocode | No | The paper describes the network architecture and its modules, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a link or statement indicating that the source code for the methodology is openly available or will be released.
Open Datasets | No | The paper introduces a new dataset, RGBNT201, stating 'we contribute a comprehensive benchmark dataset, RGBNT201'. Although it is presented as a benchmark, no URL, DOI, or specific access instructions for the dataset are provided in the paper text.
Dataset Splits | Yes | We select 141 identities for training, 30 identities for validation, and the remaining 30 identities for testing. (An identity-level split sketch follows the table below.)
Hardware Specification | Yes | The implementation platform is PyTorch with an NVIDIA GTX 1080Ti GPU.
Software Dependencies | No | The paper mentions PyTorch but does not specify its version or any other software dependencies with version numbers (e.g., specific library or operating system versions).
Experiment Setup | Yes | The initial learning rate is set to 1e-3; consequently, the number of training iterations is increased to compensate for the small learning rate. The mini-batch size is 8. In the training phase, the network is fine-tuned with Stochastic Gradient Descent (SGD) using momentum 0.9 and weight decay 0.0005. (An optimizer configuration sketch follows the table below.)
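
The paper reports only the split counts (141 training, 30 validation, 30 testing identities), and RGBNT201 is not publicly linked, so the exact assignment of identities to splits is unknown. The following is a minimal sketch of an identity-level split under those counts; the `split_identities` helper and the seeded random shuffle are assumptions for illustration, not the authors' protocol.

```python
import random

def split_identities(identity_ids, seed=0):
    """Split RGBNT201 identities 141/30/30 into train/val/test,
    matching the counts reported in the paper (assumed random assignment)."""
    ids = sorted(identity_ids)
    assert len(ids) == 201, "RGBNT201 contains 201 identities in total"
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(ids)
    return ids[:141], ids[141:171], ids[171:]

# Usage with hypothetical integer identity labels:
train_ids, val_ids, test_ids = split_identities(range(201))
```

Splitting at the identity level (rather than the image level) matters for Re-ID: all images of a given person must fall into exactly one split, otherwise the evaluation leaks identities seen during training.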
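
For readers reproducing the reported setup, here is a minimal PyTorch sketch of the optimizer configuration. Only the hyperparameters (learning rate 1e-3, momentum 0.9, weight decay 0.0005, mini-batch size 8) come from the paper; the `nn.Linear` stand-in model is hypothetical, since the paper's multi-modality network is not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the paper's network; any feature
# extractor being fine-tuned would take its place.
model = nn.Linear(2048, 201)

# SGD configuration as reported in the experiment setup.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-3,           # initial learning rate
    momentum=0.9,
    weight_decay=5e-4, # i.e., 0.0005
)

batch_size = 8  # mini-batch size from the experiment setup
```

Note that with a learning rate this small, the paper compensates by increasing the number of training iterations, so a faithful reproduction should budget more epochs than a typical fine-tuning schedule.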