Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Divide-and-Regroup Clustering for Domain Adaptive Person Re-identification
Authors: Zhengdong Hu, Yifan Sun, Yi Yang, Jianguang Zhou
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments under seven domain adaptive re-ID scenarios and demonstrate consistent improvement on several popular UDA methods. Based on a recent UDA method, DARC advances the state of the art (e.g., 85.1% mAP on MSMT-to-Market and 83.1% mAP on PersonX-to-Market). |
| Researcher Affiliation | Collaboration | Zhengdong Hu1,2*, Yifan Sun2, Yi Yang3, Jianguang Zhou1 1 Research Center for Analytical Instrumentation, Institute of Cyber-Systems and Control, State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China 2 Baidu Research, China 3 ReLER, Centre for Artificial Intelligence, University of Technology Sydney, Australia |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate the proposed DARC on different cross-domain scenes with two real person datasets, i.e., Market1501 (Zheng et al. 2015) and MSMT17 (Wei et al. 2018), and two synthetic person datasets, PersonX (Sun and Zheng 2019) and UnrealPerson (Zhang et al. 2021). |
| Dataset Splits | No | The paper describes the construction of mini-batches for training but does not provide explicit train/validation/test dataset splits or their sizes/percentages for reproducibility of the partitioning. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions 'DBSCAN (Ester et al. 1996)' and 'Adam' but does not provide specific version numbers for software libraries, programming languages, or other dependencies. |
| Experiment Setup | Yes | We construct each mini-batch with 64 source images (from 16 identities) and 64 target images (from 16 pseudo identities). Correspondingly, the batch size is 128. We resize the image size to 256×128 and utilize random flipping, random padding and random erasing (Zhong et al. 2017b) for data augmentation. The essential clustering method for the local clustering and global clustering in DARC is DBSCAN (Ester et al. 1996). The training optimizer is Adam with 5×10⁻⁴ weight decay. |
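The mini-batch scheme quoted above (64 source images from 16 identities plus 64 target images from 16 pseudo identities, batch size 128) can be sketched as a simple identity-balanced sampler. This is an illustrative reconstruction, not the authors' code: the function name `make_batch`, the dict-of-image-lists data layout, and the with-replacement fallback for small identities are all assumptions.

```python
import random

def make_batch(source_ids, target_ids, images_per_id=4, ids_per_domain=16, seed=0):
    """Sketch of the identity-balanced mini-batch described in the paper:
    16 identities x 4 images from the source domain and 16 pseudo
    identities x 4 images from the target domain, i.e. 128 images total.
    `source_ids` / `target_ids` map identity -> list of image paths
    (a hypothetical layout chosen for this sketch)."""
    rng = random.Random(seed)
    batch = []
    for pool in (source_ids, target_ids):
        # draw 16 distinct identities from this domain
        chosen = rng.sample(sorted(pool), ids_per_domain)
        for pid in chosen:
            imgs = pool[pid]
            # sample 4 images per identity; fall back to sampling with
            # replacement when an identity has fewer than 4 images
            if len(imgs) >= images_per_id:
                picks = rng.sample(imgs, images_per_id)
            else:
                picks = [rng.choice(imgs) for _ in range(images_per_id)]
            batch.extend((pid, p) for p in picks)
    return batch

# toy example: 20 source identities and 20 target pseudo identities, 6 images each
src = {f"s{i}": [f"s{i}_{j}.jpg" for j in range(6)] for i in range(20)}
tgt = {f"t{i}": [f"t{i}_{j}.jpg" for j in range(6)] for i in range(20)}
batch = make_batch(src, tgt)
print(len(batch))  # 128
```

Balancing identities within the batch in this way is the standard prerequisite for the triplet-style and contrastive objectives common in re-ID training, since each identity must contribute multiple images per batch.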