Catalyst for Clustering-Based Unsupervised Object Re-identification: Feature Calibration

Authors: Huafeng Li, Qingsong Hu, Zhanxuan Hu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section first provides the experimental details, then validates the significant benefits of FCM for clustering-based unsupervised object Re-ID, and highlights its impact on the representation space and clustering precision. Experimental Results: We conduct a performance comparison between CC-FCMViT and several state-of-the-art methods on three object Re-ID datasets.
Researcher Affiliation | Academia | ¹School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China; ²School of Information Science and Technology, Yunnan Normal University, Kunming 650500, China
Pseudocode | Yes | Algorithm 1: Improved clustering-based unsupervised object Re-ID with feature calibration
Open Source Code | Yes | Code is available at: https://github.com/lhf12278/FCM-ReID
Open Datasets | Yes | The models are evaluated and compared on three widely used benchmark object Re-ID datasets: Market-1501 (Zheng et al. 2015a), MSMT17 (Wei et al. 2018), and DukeMTMC-reID (Zheng, Zheng, and Yang 2017).
Dataset Splits | No | The paper specifies training and test set sizes for the datasets (e.g., "The training set includes 751 people... and the remaining images are the test set."), but it does not explicitly mention a separate validation split or how one was used.
Hardware Specification | Yes | We conduct all experiments using a single NVIDIA GeForce RTX 3090.
Software Dependencies | No | The paper mentions software components such as ViT, DBSCAN, and Stochastic Gradient Descent (SGD), but it does not provide version numbers for these or other software dependencies.
Experiment Setup | Yes | Pedestrian images are randomly cropped to 256×128, then undergo random flipping and erasing. Training runs for 50 epochs with a batch size of 256. The learning rate is initialized to 3.5e-4 and reduced by a factor of 10 every 20 epochs; the weight decay of the optimizer is 5e-4. The DBSCAN hyper-parameter ϵ is fixed at 0.6, 0.8, and 0.7 on Market-1501, DukeMTMC-reID, and MSMT17, respectively. The momentum parameter used in Eq. (10) is 0.2.
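The pseudocode row above points to Algorithm 1, a clustering-based unsupervised Re-ID loop with feature calibration. As a minimal sketch of the generic clustering step such pipelines repeat each epoch — L2-normalize features, run DBSCAN to get pseudo labels, and build cluster centroids for a memory bank — the following assumes scikit-learn's DBSCAN; the function names are illustrative, not taken from the released FCM-ReID code:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def pseudo_label_epoch(features, eps=0.6, min_samples=4):
    """One clustering step of a standard unsupervised Re-ID pipeline:
    L2-normalize the features and cluster them with DBSCAN.
    Returns pseudo labels; -1 marks un-clustered outliers, which are
    typically dropped from that epoch's training set."""
    feats = normalize(features)  # unit norm, so Euclidean eps acts like a cosine threshold
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(feats)

def cluster_centroids(features, labels):
    """Mean feature per pseudo-identity, as used to initialize a cluster
    memory (later updated with momentum; the paper fixes it at 0.2)."""
    feats = normalize(features)
    ids = sorted(set(labels) - {-1})  # skip DBSCAN outliers
    centroids = np.stack([feats[labels == k].mean(axis=0) for k in ids])
    return centroids, ids
```

The paper's contribution (FCM) calibrates the features before this clustering step; the sketch only shows the surrounding pipeline that FCM plugs into.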
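The hyper-parameters quoted in the Experiment Setup row can be collected in one place. This is a sketch using only the values stated in the paper; the constant and helper names themselves are illustrative, not from the released code:

```python
import math

# Values copied from the reported setup.
EPOCHS = 50
BATCH_SIZE = 256
WEIGHT_DECAY = 5e-4
MEMORY_MOMENTUM = 0.2           # momentum in Eq. (10)
DBSCAN_EPS = {                  # per-dataset DBSCAN eps
    "Market-1501": 0.6,
    "DukeMTMC-reID": 0.8,
    "MSMT17": 0.7,
}

def step_lr(epoch, base_lr=3.5e-4, step=20, gamma=0.1):
    """Learning-rate schedule from the setup: start at 3.5e-4 and
    divide by 10 every 20 epochs."""
    return base_lr * gamma ** (epoch // step)
```

With a 50-epoch run this yields three plateaus: 3.5e-4 for epochs 0-19, 3.5e-5 for 20-39, and 3.5e-6 for 40-49.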