TOP-ReID: Multi-Spectral Object Re-identification with Token Permutation
Authors: Yuhao Wang, Xuehu Liu, Pingping Zhang, Hu Lu, Zhengzheng Tu, Huchuan Lu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three ReID benchmarks (i.e., RGBNT201, RGBNT100 and MSVR310) verify the effectiveness of our methods. |
| Researcher Affiliation | Academia | 1School of Future Technology, School of Artificial Intelligence, Dalian University of Technology 2School of Computer Science and Artificial Intelligence, Wuhan University of Technology 3School of Computer Science and Communication Engineering, Jiangsu University 4School of Computer Science and Technology, Anhui University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/924973292/TOP-ReID. |
| Open Datasets | Yes | RGBNT201 (Zheng et al. 2021) is the first multi-spectral person ReID dataset with RGB, NIR and TIR spectra. RGBNT100 (Li et al. 2020b) is a large-scale multi-spectral vehicle ReID dataset. MSVR310 (Zheng et al. 2022) is a small-scale multi-spectral vehicle ReID dataset with more complex scenarios. |
| Dataset Splits | No | The paper mentions using datasets for evaluation but does not specify the training, validation, and test splits with percentages or sample counts. |
| Hardware Specification | Yes | We conduct experiments with one NVIDIA A800 GPU. |
| Software Dependencies | No | The paper mentions the "PyTorch toolbox" but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | All images are resized to 256×128×3 pixels. When training, random horizontal flipping, cropping and erasing (Zhong et al. 2020) are used as data augmentation. We set the mini-batch size to 128. Each mini-batch consists of 8 randomly selected object identities, and 16 images are sampled for each identity. We use the Stochastic Gradient Descent (SGD) optimizer with a momentum coefficient of 0.9 and a weight decay of 0.0001. Furthermore, the learning rate is initialized as 0.009. The warmup strategy and cosine decay are used during training. |
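The reported setup can be sketched as plain Python: an identity-balanced mini-batch sampler (8 identities × 16 images = 128) and a learning-rate schedule with linear warmup into the paper's base rate of 0.009 followed by cosine decay. The warmup length, warmup starting value, and the `ids_to_images` mapping are assumptions for illustration; the paper does not specify them.

```python
import math
import random

def lr_schedule(epoch, total_epochs, base_lr=0.009,
                warmup_epochs=10, warmup_start=1e-4):
    """Warmup + cosine decay. base_lr=0.009 is from the paper;
    warmup_epochs and warmup_start are assumed values."""
    if epoch < warmup_epochs:
        # linear ramp from warmup_start up to base_lr
        return warmup_start + (base_lr - warmup_start) * epoch / warmup_epochs
    # cosine decay from base_lr toward 0 over the remaining epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def identity_balanced_batch(ids_to_images, num_ids=8, imgs_per_id=16, rng=None):
    """Sample num_ids identities, then imgs_per_id images per identity
    (with replacement, in case an identity has few images)."""
    rng = rng or random.Random(0)
    chosen = rng.sample(sorted(ids_to_images), num_ids)
    batch = []
    for pid in chosen:
        batch.extend(rng.choices(ids_to_images[pid], k=imgs_per_id))
    return batch  # len == num_ids * imgs_per_id == 128 by default
```

This is a minimal sketch of the schedule and sampling logic only; in a PyTorch pipeline these would typically be realized via a custom batch sampler and an LR scheduler attached to the SGD optimizer (momentum 0.9, weight decay 0.0001).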