Dual Distribution Alignment Network for Generalizable Person Re-Identification

Authors: Peixian Chen, Pingyang Dai, Jianzhuang Liu, Feng Zheng, Mingliang Xu, Qi Tian, Rongrong Ji

AAAI 2021, pp. 1054-1062

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method in a large-scale DG Re-ID benchmark and compare it with various cutting-edge DG approaches. Quantitative results show that DDAN achieves state-of-the-art performance." The paper also contains a "4 Experiments" section with sub-sections such as "4.2 Comparison with State-of-the-Arts" and "4.5 Ablation Study".
Researcher Affiliation | Collaboration | Peixian Chen (1), Pingyang Dai (1), Jianzhuang Liu (3), Feng Zheng (2), Mingliang Xu (4), Qi Tian (5), Rongrong Ji (1,6). (1) Media Analytics and Computing Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University; (2) Department of Computer Science and Engineering, Southern University of Science and Technology; (3) Noah's Ark Lab, Huawei Tech.; (4) School of Information Engineering, Zhengzhou University; (5) Cloud & AI, Huawei Tech.; (6) Institute of Artificial Intelligence, Xiamen University.
Pseudocode | No | The paper describes its method using textual explanations and mathematical equations, but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "We conduct experiments on the large-scale DG Re-ID benchmark (Song et al. 2019) to evaluate our DG model for person Re-ID." Specifically, CUHK02 (Li and Wang 2013), CUHK03 (Li et al. 2014), Market-1501 (Zheng et al. 2015), DukeMTMC-ReID (Zheng, Zheng, and Yang 2017) and CUHK-SYSU Person Search (Xiao et al. 2016) are taken as the source datasets.
Dataset Splits | No | The paper states that "All images in these source datasets, regardless of their train/test splits, are used for training, in total 121,765 images of 18,530 identities." The entire source pool is thus used for training, and no explicit train/validation/test split of the training data itself is provided (see the dataset-merging sketch after this table).
Hardware Specification | Yes | "We implement our model with PyTorch and train it on a single 1080-Ti GPU."
Software Dependencies | No | The paper mentions PyTorch as the implementation framework but does not provide a specific version for it or for any other software dependencies.
Experiment Setup | Yes | The learning rate is initially set to 0.1 and multiplied by 0.1 every 40 epochs. The domain discriminator D_θd consists of a 128-D and a 2-D fully connected (FC) layer, each with batch normalization (BN), while the identity discriminator I_θi is an 18,530-D (i.e., the number of identities) FC layer with BN. The updating rate α in Eq. (7) is set to 0.05. The triplet loss margin in Eq. (2) is 0.3. The softmax temperature τ in Eq. (8) is 2 × 10⁻³. The loss weights in Eq. (9) are λ1 = 1.0, λ2 = 0.18 and λ3 = 0.05. The model is trained for 100 epochs with a batch size of 64 (4 images per identity). LSE is enabled after the 4th epoch to stabilize the learned representations. (A hedged configuration sketch follows the table.)
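
To make the training-data regime concrete, below is a minimal PyTorch sketch of pooling the five source sets while ignoring their original train/test splits, as the Dataset Splits row quotes. The wrapper class, the (image, local_id) sample format, and the per-source domain-label assignment are illustrative assumptions, not the authors' code.

    from torch.utils.data import ConcatDataset, Dataset

    class RelabeledDataset(Dataset):
        """Wraps one source dataset and offsets its identity labels so that
        identities remain unique across the merged pool (assumed sample
        format: (image, local_id))."""
        def __init__(self, base, id_offset, domain_label):
            self.base = base
            self.id_offset = id_offset
            self.domain_label = domain_label  # per-source tag, usable by a domain discriminator

        def __len__(self):
            return len(self.base)

        def __getitem__(self, idx):
            img, local_id = self.base[idx]
            return img, local_id + self.id_offset, self.domain_label

    def merge_sources(sources):
        """sources: list of (dataset, num_identities), one entry per source
        domain (CUHK02, CUHK03, Market-1501, DukeMTMC-ReID, CUHK-SYSU)."""
        merged, offset = [], 0
        for domain_idx, (ds, n_ids) in enumerate(sources):
            merged.append(RelabeledDataset(ds, offset, domain_idx))
            offset += n_ids
        # For the benchmark above, offset should end at 18,530 identities
        # spanning 121,765 images in total.
        return ConcatDataset(merged)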
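
And a minimal PyTorch sketch of the heads and schedule listed in the Experiment Setup row. The 2048-D backbone feature dimension (a common ResNet-50 choice), the use of SGD, and the BN placement after each FC layer are assumptions not stated in the quoted text.

    import torch.nn as nn
    import torch.optim as optim

    FEAT_DIM = 2048   # assumed backbone output dimension (ResNet-50-style)
    NUM_IDS = 18530   # total identities across the merged source pool

    # Domain discriminator D_θd: 128-D FC + BN, then 2-D FC + BN.
    domain_disc = nn.Sequential(
        nn.Linear(FEAT_DIM, 128), nn.BatchNorm1d(128),
        nn.Linear(128, 2), nn.BatchNorm1d(2),
    )

    # Identity discriminator I_θi: a single 18,530-D FC layer with BN.
    identity_disc = nn.Sequential(
        nn.Linear(FEAT_DIM, NUM_IDS), nn.BatchNorm1d(NUM_IDS),
    )

    params = list(domain_disc.parameters()) + list(identity_disc.parameters())
    optimizer = optim.SGD(params, lr=0.1)  # optimizer type is an assumption
    # lr starts at 0.1 and decays by 0.1 every 40 epochs over 100 epochs total.
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)

    # Scalars reported in the Experiment Setup row.
    ALPHA = 0.05                                  # updating rate, Eq. (7)
    TRIPLET_MARGIN = 0.3                          # Eq. (2)
    SOFTMAX_TAU = 2e-3                            # softmax temperature, Eq. (8)
    LAMBDA1, LAMBDA2, LAMBDA3 = 1.0, 0.18, 0.05   # loss weights, Eq. (9)
    EPOCHS, BATCH_SIZE, IMGS_PER_ID = 100, 64, 4

A batch size of 64 with 4 images per identity implies a PK-style sampler drawing 16 identities per batch, though the paper's exact sampler is not specified.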