Unsupervised Group Re-identification via Adaptive Clustering-Driven Progressive Learning

Authors: Hongxu Chen, Quan Zhang, Jian-Huang Lai, Xiaohua Xie

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on four popular G-ReID datasets demonstrate that our method not only achieves state-of-the-art performance on unsupervised G-ReID but also performs comparably to several fully supervised approaches.
Researcher Affiliation | Academia | 1 School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China; 2 Pazhou Lab (Huang Pu), Guangdong 510000, China; 3 Guangdong Key Laboratory of Information Security Technology, Guangzhou 510006, China; 4 Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China; {chenhx87, zhangq48}@mail2.sysu.edu.cn, {stsljh, xiexiaoh6}@mail.sysu.edu.cn
Pseudocode | No | The paper describes the steps of the proposed methods (GAC and GDPU) using descriptive text and equations, but it does not contain a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We evaluate our proposed method on four G-ReID datasets: CSG (Yan et al. 2020), Road Group (Xiao et al. 2018), i-LIDS MCTS (Zheng, Gong, and Xiang 2009) and SYSU-Group (Mei et al. 2020).
Dataset Splits | No | The paper mentions 'We randomly and equally split the training and test sets of i-LIDS MCTS and SYSU-Group according to the protocol (Lin et al. 2019).' and 'We follow the division of these two datasets for training and testing as described in (Zhang et al. 2022b).' It does not explicitly mention a separate validation split within the text.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, memory amounts, or other machine specifications used for running the experiments.
Software Dependencies | No | The paper mentions software such as PP-YOLOE and the Adam optimizer, but does not provide specific version numbers for any software dependency required to replicate the experiments.
Experiment Setup | Yes | During training, we adopt the Adam optimizer with weight decay 5e-4. The initial learning rate is set to 3.5e-4 and reduced to 0.1 of its previous value every 10 epochs. For person ReID, we train for 25 epochs with GDPU. At the beginning of each epoch, we first apply DBSCAN with an eps of 0.6 for pseudo-label assignment. We then use GAC to obtain group pseudo-labels and train for the G-ReID task for 25 epochs with the same settings as above. We set the values of ϵ_c and N_c to 3 and 10 respectively.
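The hyperparameters quoted in the Experiment Setup row can be sketched as a small configuration snippet. This is a hypothetical illustration only (function and constant names are ours, not from the authors' code); it encodes the reported step-decay schedule of multiplying the learning rate by 0.1 every 10 epochs, alongside the other stated constants.

```python
# Hypothetical sketch of the reported training configuration; all names
# below are illustrative assumptions, not the authors' released code.

BASE_LR = 3.5e-4      # initial learning rate (as reported)
WEIGHT_DECAY = 5e-4   # Adam weight decay (as reported)
DBSCAN_EPS = 0.6      # eps for per-epoch pseudo-label clustering
EPOCHS = 25           # epochs for both the person ReID and G-ReID stages


def lr_at_epoch(epoch: int, base_lr: float = BASE_LR,
                step: int = 10, gamma: float = 0.1) -> float:
    """Step decay: the learning rate is multiplied by `gamma`
    every `step` epochs, matching the quoted schedule."""
    return base_lr * gamma ** (epoch // step)


# Learning rate over the 25-epoch training run.
schedule = [lr_at_epoch(e) for e in range(EPOCHS)]
```

Under this reading, epochs 0-9 use 3.5e-4, epochs 10-19 use 3.5e-5, and epochs 20-24 use 3.5e-6; the same schedule would apply to both the GDPU person-ReID stage and the subsequent GAC-labeled G-ReID stage.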