RaSa: Relation and Sensitivity Aware Representation Learning for Text-based Person Search
Authors: Yang Bai, Min Cao, Daming Gao, Ziqiang Cao, Chen Chen, Zhenfeng Fan, Liqiang Nie, Min Zhang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that RaSa outperforms existing state-of-the-art methods by 6.94%, 4.45% and 15.35% in terms of Rank@1 on the CUHK-PEDES, ICFG-PEDES and RSTPReid datasets, respectively. |
| Researcher Affiliation | Academia | ¹School of Computer Science and Technology, Soochow University; ²Institute of Automation, Chinese Academy of Sciences; ³Institute of Computing Technology, Chinese Academy of Sciences; ⁴Harbin Institute of Technology, Shenzhen |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/Flame-Chasers/RaSa. |
| Open Datasets | Yes | We conduct experiments on three text-based person search datasets: CUHK-PEDES [Li et al., 2017], ICFG-PEDES [Ding et al., 2021] and RSTPReid [Zhu et al., 2021]. |
| Dataset Splits | No | The paper mentions evaluating on 'test images' and discusses dataset usage, but it does not provide specific train/validation/test dataset split percentages, sample counts, or explicit citations for predefined splits in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like ALBEF, TCL, CLIP, BERT, and DistilBERT, but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | The best result is achieved at p_w = 0.1. ... RaSa performs best at p_m = 0.3. ... Empirical results show that RaSa performs best when they are set to 0.5. |
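
The headline results above are reported as Rank@1, the standard retrieval metric for text-based person search. For readers unfamiliar with it, the sketch below shows how Rank@k is typically computed from a text-to-image similarity matrix; this is a minimal illustration and not the authors' code, and the array shapes and the `gt` match-set format are assumptions for the example.

```python
# Minimal sketch (not the authors' code) of the Rank@k retrieval metric.
# Assumes sims[i, j] is the similarity between text query i and gallery
# image j, and gt[i] is the set of gallery indices matching query i.
import numpy as np

def rank_at_k(sims: np.ndarray, gt: list[set[int]], k: int = 1) -> float:
    """Fraction of queries whose top-k retrieved images contain a true match."""
    # Gallery indices sorted by descending similarity, per query.
    order = np.argsort(-sims, axis=1)
    hits = 0
    for i, ranked in enumerate(order):
        # A query is a hit if any ground-truth image appears in the top k.
        if any(j in gt[i] for j in ranked[:k]):
            hits += 1
    return hits / len(gt)

# Toy usage: 2 queries, 3 gallery images.
sims = np.array([[0.9, 0.1, 0.3],
                 [0.2, 0.8, 0.5]])
gt = [{0}, {2}]
print(rank_at_k(sims, gt, k=1))  # 0.5: query 0 hits; query 1's match ranks 2nd
```

The percentage improvements quoted in the table are differences in this Rank@1 value between RaSa and the prior state of the art on each dataset.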