Debiased Batch Normalization via Gaussian Process for Generalizable Person Re-identification
Authors: Jiawei Liu, Zhipeng Huang, Liang Li, Kecheng Zheng, Zheng-Jun Zha
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our GDNorm effectively improves the generalization ability of the model on unseen domain. |
| Researcher Affiliation | Academia | Jiawei Liu1*, Zhipeng Huang1*, Liang Li2, Kecheng Zheng1, Zheng-Jun Zha1 1 University of Science and Technology of China 2 Institute of Computing Technology, Chinese Academy of Sciences |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code release or a link to a code repository. |
| Open Datasets | Yes | As shown in Tab. 2, source domains include CUHK02 (Li and Wang 2013), CUHK03 (Li et al. 2014), Market-1501 (Zheng et al. 2015), Duke MTMC (Zheng, Zheng, and Yang 2017) and CUHK-SYSU (Xiao et al. 2017). |
| Dataset Splits | No | The paper states 'All training sets and testing sets in the source domains are used for model training' and describes a 'leave-one-out setting' where 'three datasets as the source domains for training and the remaining one as the target domain for testing,' but it does not specify explicit validation splits (e.g., percentages or counts) within these datasets, nor does it explicitly mention a dedicated validation set. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA Titan XP GPU. |
| Software Dependencies | No | The paper mentions using ResNet50 and automatic mixed-precision training, but it does not provide specific version numbers for any software dependencies or libraries required to replicate the experiments. |
| Experiment Setup | Yes | Images are resized to 384 × 128, and the training batch size is set to 128, including 8 identities and 16 images per identity. For data augmentation, we use random flipping, random cropping and color jittering. We train the model for 60 epochs. The learning rate is initialized as 3.5 × 10⁻⁴ and divided by 10 at the 40th epoch; weight decay is 5 × 10⁻⁴. λ in Eq. 11 is set to 0.6. |
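
The reported experiment setup can be summarized in a short PyTorch-style sketch. This is not the authors' code: the optimizer choice (Adam), crop padding, and color-jitter strengths are assumptions, and the GDNorm-specific layers and identity-balanced batch sampler are omitted.

```python
# Minimal sketch of the reported training configuration (assumptions noted inline).
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

# Reported input size and augmentation: 384x128, random flip, random crop, color jitter.
# Crop padding and jitter strengths are not specified in the paper; values here are placeholders.
train_transform = T.Compose([
    T.Resize((384, 128)),
    T.RandomHorizontalFlip(),
    T.RandomCrop((384, 128), padding=10),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
    T.ToTensor(),
])

# Plain ResNet50 backbone; the paper's GDNorm modules are not reproduced here.
model = resnet50(weights=None)

# Reported optimization: lr 3.5e-4, weight decay 5e-4, 60 epochs, lr divided by 10 at epoch 40.
# The optimizer type is an assumption (the paper does not name it in the quoted setup).
optimizer = torch.optim.Adam(model.parameters(), lr=3.5e-4, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40], gamma=0.1)

# The paper mentions automatic mixed-precision training.
scaler = torch.cuda.amp.GradScaler()

# Reported batching: 128 images per batch = 8 identities x 16 images per identity,
# which would typically require an identity-balanced sampler (not shown).
BATCH_SIZE, NUM_IDS, IMGS_PER_ID = 128, 8, 16
NUM_EPOCHS = 60
```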