RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-Identification
Authors: Lei Tan, Yukang Zhang, Keke Han, Pingyang Dai, Yan Zhang, Yongjian Wu, Rongrong Ji
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results not only demonstrate the superiority and effectiveness of RLE but also confirm its great potential as a general-purpose data augmentation for cross-spectral re-identification. |
| Researcher Affiliation | Collaboration | 1Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China. 2Tencent Youtu Lab, China. |
| Pseudocode | Yes | Algorithm 1: Radical Random Linear Enhancement |
| Open Source Code | Yes | The code is available at https://github.com/stone96123/RLE. |
| Open Datasets | Yes | We conduct experiments on two publicly available visible-infrared person re-identification datasets, SYSU-MM01 [31] and RegDB [32]. |
| Dataset Splits | Yes | SYSU-MM01 is a large-scale dataset... The training set contains 395 identities with 22,258 visible images and 11,909 infrared images, while the testing set includes 96 identities with 3,803 infrared images as the query. For RegDB, following the evaluation protocol of previous works [33, 34], we choose half of the identities at random for training and the other half for testing. |
| Hardware Specification | Yes | We use PyTorch to implement our method and run all experiments on a single RTX 3090 GPU. |
| Software Dependencies | No | The paper states that PyTorch is used and that all experiments run on a single RTX 3090 GPU, but no version numbers for PyTorch or any other libraries are provided. |
| Experiment Setup | Yes | The mini-batch size is set to 48. For each mini-batch, we randomly select 4 identities, each with 6 visible images and 6 infrared images. We resize all images to 384 × 192 and use random flipping as basic data augmentation. The initial learning rate is set to 0.1 and is decayed by factors of 0.1 and 0.01 at epochs 20 and 50, respectively. Following previous works [38, 11, 39], we apply a warm-up strategy in the first 10 epochs. |
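The table notes that the paper provides pseudocode for its augmentation (Algorithm 1: Radical Random Linear Enhancement). The exact algorithm is in the paper and repository; as a rough illustration of the general idea of a random *linear* enhancement of image channels, the sketch below applies a random convex channel mixing to a tensor image. The function name and parameters are hypothetical, not the paper's actual Algorithm 1.

```python
import torch

def random_linear_enhancement(img: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a channel-wise random linear enhancement.

    img: float tensor of shape (C, H, W) with values in [0, 1].
    Each output channel is a random convex combination of the input
    channels, i.e. a random linear transform across the spectrum axis.
    This is an illustrative stand-in, NOT the paper's RLE algorithm.
    """
    c = img.shape[0]
    weights = torch.rand(c, c)                            # random mixing coefficients
    weights = weights / weights.sum(dim=1, keepdim=True)  # normalize rows to sum to 1
    mixed = torch.einsum("oc,chw->ohw", weights, img)     # linear channel mixing
    return mixed.clamp(0.0, 1.0)
```

Because the rows of the mixing matrix are normalized, values stay in the input range, so the transform can be dropped into a standard augmentation pipeline before normalization.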
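The sampling scheme in the setup row (4 identities per mini-batch, 6 visible and 6 infrared images each, for a batch of 48) can be sketched as a small helper. The function name, dictionary layout, and sampling-with-replacement choice are assumptions made for brevity, not the paper's actual sampler.

```python
import random

def sample_batch(vis_by_id, ir_by_id, num_ids=4, per_mod=6):
    """Hypothetical sketch of identity-balanced cross-modal sampling:
    num_ids identities per mini-batch, per_mod visible + per_mod infrared
    images each (4 x (6 + 6) = 48 images total, as in the paper's setup).

    vis_by_id / ir_by_id: dicts mapping identity -> list of image paths.
    Samples with replacement for simplicity; a real sampler would track
    which images have already been used in the epoch.
    """
    ids = random.sample(list(vis_by_id), num_ids)  # pick distinct identities
    batch = []
    for pid in ids:
        batch += random.choices(vis_by_id[pid], k=per_mod)  # visible images
        batch += random.choices(ir_by_id[pid], k=per_mod)   # infrared images
    return batch
```

Balancing identities and modalities within each batch is what makes identity-level metric losses (triplet-style objectives across spectra) well-defined during training.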