Generalizable Person Re-identification via Self-Supervised Batch Norm Test-Time Adaption

Authors: Ke Han, Chenyang Si, Yan Huang, Liang Wang, Tieniu Tan (pp. 817–825)

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "To demonstrate the effectiveness of our method, we conduct extensive experiments on three re-id datasets and confirm the superior performance to the state-of-the-art methods." |
| Researcher Affiliation | Academia | 1. Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences; 2. School of Future Technology, University of Chinese Academy of Sciences (UCAS); 3. School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS); 4. Center for Excellence in Brain Science and Intelligence Technology (CEBSIT); 5. Chinese Academy of Sciences, Artificial Intelligence Research (CAS-AIR) |
| Pseudocode | Yes | Algorithm 1: Part nearest neighbor pairing |
| Open Source Code | No | The paper does not provide any links or statements about the availability of its source code. |
| Open Datasets | Yes | "We construct the training set by mixing five source domains: CUHK02 (Li and Wang 2013), CUHK03 (Li et al. 2014), Market-1501 (Zheng et al. 2015), DukeMTMC-ReID (Zheng, Zheng, and Yang 2017), and CUHK-SYSU Person Search (Xiao et al. 2016). All images in the source domains are used for training regardless of train or test splits, covering 121,765 images of 18,530 identities in total. The test sets include VIPeR (Gray and Tao 2008), GRID (Loy, Xiang, and Gong 2009), and iLIDS (Zheng, Gong, and Xiang 2009)." The model is pre-trained on ImageNet (Deng et al. 2009). |
| Dataset Splits | No | The paper does not define traditional training/validation/test splits with percentages or sample counts. It trains on a mix of source domains and tests on distinct target domains, with results averaged over 10 random splits of the test sets; no dedicated validation set is specified. |
| Hardware Specification | Yes | "All the experiments are conducted on a single NVIDIA Titan Xp GPU with PyTorch." |
| Software Dependencies | No | The paper mentions PyTorch but does not specify a version number or list other software dependencies with versions. |
| Experiment Setup | Yes | "The learning rate η_t is initialized at 0.005, and decayed by 10 after 40 epochs. The batch size is set to 64. Other hyper-parameters are set as follows: the number of stripes H = 6, the weight factors λ1 = λ2 = 0.1, λ3 = 1, the learning rate η_tta = 0.0005, the margin φ = 0.3, the dimensions C = 2048, C_l = 256." |
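The reported optimization settings (initial training learning rate η_t = 0.005 divided by 10 after 40 epochs, batch size 64, and the listed hyper-parameters) can be collected in a small sketch for anyone attempting a re-implementation. The constant names and the step-schedule helper below are illustrative assumptions, not taken from the authors' (unreleased) code:

```python
# Sketch of the hyper-parameters reported in the paper.
# Variable names are our own; only the values come from the paper.
TRAIN_LR = 0.005      # eta_t, initial training learning rate
TTA_LR = 0.0005       # eta_tta, test-time adaptation learning rate
BATCH_SIZE = 64
NUM_STRIPES = 6       # H, number of horizontal part stripes
MARGIN = 0.3          # phi
LAMBDA1 = LAMBDA2 = 0.1
LAMBDA3 = 1.0
FEAT_DIM = 2048       # C
PART_DIM = 256        # C_l
DECAY_EPOCH = 40      # "decayed by 10 after 40 epochs"


def training_lr(epoch: int) -> float:
    """Step schedule: eta_t = 0.005, divided by 10 from epoch 40 onward."""
    return TRAIN_LR if epoch < DECAY_EPOCH else TRAIN_LR / 10.0
```

For example, `training_lr(39)` returns 0.005 and `training_lr(40)` returns 0.0005; in a PyTorch training loop this would typically be expressed with a step learning-rate scheduler instead of a hand-written function.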