Matching on Sets: Conquer Occluded Person Re-identification Without Alignment
Authors: Mengxi Jia, Xinhua Cheng, Yunpeng Zhai, Shijian Lu, Siwei Ma, Yonghong Tian, Jian Zhang
AAAI 2021, pp. 1673-1681 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments over three widely used datasets (Market1501, DukeMTMC and Occluded-DukeMTMC) show that MoS achieves superior re-ID performance. |
| Researcher Affiliation | Academia | 1 School of Electronic and Computer Engineering, Peking University, China; 2 College of Computer Science, Sichuan University, China; 3 Nanyang Technological University, Singapore; 4 School of Electronics Engineering and Computer Science, Peking University, China; 5 Peng Cheng Laboratory, China |
| Pseudocode | Yes | Algorithm 1: Matching on Sets (MoS) |
| Open Source Code | No | The paper does not include an explicit statement about releasing the source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We evaluate MoS over two occluded re-ID datasets, Occluded-DukeMTMC (Miao et al. 2019) and P-ETHZ (Zhuo et al. 2018). We also evaluate MoS over two widely used holistic re-ID datasets, Market-1501 (Zheng et al. 2015a) and DukeMTMC-reID (Zheng, Zheng, and Yang 2017), to test its generalizability. |
| Dataset Splits | Yes | Occluded-DukeMTMC is a split of DukeMTMC-reID (Zheng, Zheng, and Yang 2017) which contains 15,618 training images, 17,661 gallery images, and 2,210 occluded query images. The experiments on this dataset follow the standard setting (Miao et al. 2019). ... Following (Zhuo et al. 2018), we randomly select images of half identities for training and the rest for test. (See the split sketch after the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running its experiments, such as GPU or CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions "PyTorch" as the implementation framework but does not provide specific version numbers for it or any other key software dependencies. |
| Experiment Setup | Yes | During training, the input image is resized to 256 × 128 and augmented with random horizontal flipping, random erasing (Zhong et al. 2020) and random cropping. We warm up the model for 10 epochs with a linearly growing learning rate from 3.5 × 10⁻⁵ to 3.5 × 10⁻⁴, and then decrease it by a factor of 0.1 at the 40th and 70th epochs. The batch size is set to 64 and the Adam optimizer is adopted in model training. We fixed ε = 0.1 and λ = 0.001 in experiments, and implement the networks in PyTorch. (See the training sketch after the table.) |
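The half-identity split quoted in the Dataset Splits row is simple to reproduce. Below is a minimal sketch, assuming images are already grouped by identity label; the `images_by_id` structure and the fixed seed are illustrative assumptions, not details from the paper.

```python
import random

def split_identities(images_by_id, seed=0):
    """Randomly assign half of the identities to training and the rest
    to test, following the P-ETHZ protocol (Zhuo et al. 2018).

    images_by_id: dict mapping an identity label to its list of image paths.
    The fixed seed is an assumption made here for repeatability.
    """
    ids = sorted(images_by_id)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    train = [p for i in ids[:half] for p in images_by_id[i]]
    test = [p for i in ids[half:] for p in images_by_id[i]]
    return train, test
```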
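For concreteness, the Experiment Setup row maps onto a short PyTorch training configuration. The sketch below is an illustration under stated assumptions, not the authors' code: the placeholder model and the 10-pixel pad before random cropping (a common re-ID recipe) are assumptions; the transforms, optimizer, and learning-rate schedule follow the quoted hyperparameters.

```python
from torch import nn, optim
from torchvision import transforms

# Placeholder network; the actual MoS model is defined in the paper and
# not reproduced here.
model = nn.Linear(128, 128)

# Quoted augmentations: resize to 256 x 128, random horizontal flipping,
# random cropping, and random erasing (Zhong et al. 2020). The pad before
# the crop is an assumption; the paper does not give crop parameters.
train_transform = transforms.Compose([
    transforms.Resize((256, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.Pad(10),
    transforms.RandomCrop((256, 128)),
    transforms.ToTensor(),
    transforms.RandomErasing(),  # operates on tensors, hence after ToTensor
])

# Adam at the base rate 3.5e-4; the batch size of 64 would be set on the
# DataLoader.
optimizer = optim.Adam(model.parameters(), lr=3.5e-4)

def lr_factor(epoch: int) -> float:
    """10-epoch linear warmup from 3.5e-5 to 3.5e-4, then x0.1 decays
    at the 40th and 70th epochs."""
    if epoch < 10:
        return 0.1 + 0.9 * epoch / 9  # 0.1x at epoch 0 up to 1.0x at epoch 9
    if epoch < 40:
        return 1.0
    if epoch < 70:
        return 0.1
    return 0.01

scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)
```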