LSTKC: Long Short-Term Knowledge Consolidation for Lifelong Person Re-identification
Authors: Kunlun Xu, Xu Zou, Jiahuan Zhou
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Consequently, experimental results show that our LSTKC exceeds the state-of-the-art methods by 6.3%/9.4% and 7.9%/4.5%, 6.4%/8.0% and 9.0%/5.5% average mAP/R@1 on seen and unseen domains under two different training orders of the challenging LReID benchmark respectively. |
| Researcher Affiliation | Academia | Wangxuan Institute of Computer Technology, Peking University; School of Artificial Intelligence and Automation, Huazhong University of Science and Technology |
| Pseudocode | No | The paper describes its methods in text and uses diagrams (Figure 2) but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, nor does it mention code release or repository links. |
| Open Datasets | Yes | We conducted all our experiments on the widely-used LReID benchmark (Pu et al. 2021), which consists of 12 datasets. Among them, five datasets (Market-1501 (Zheng et al. 2015), DukeMTMC-reID (Ristani et al. 2016), CUHK-SYSU (Xiao et al. 2016), MSMT17-V2 (Wei et al. 2018), and CUHK03 (Li et al. 2014)) are seen datasets used for LReID training and anti-forgetting testing. |
| Dataset Splits | No | The paper specifies training epochs and general training data, but does not explicitly provide details about a dedicated validation split (percentages, counts, or methodology) for hyperparameter tuning or model selection. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA 4090 GPU. |
| Software Dependencies | No | The paper states 'Our implementation is based on PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For both training orders, the first dataset is trained for 80 epochs and the subsequent datasets are trained for 60 epochs using an SGD optimizer with a momentum of 0.9. The learning rate is set to 8e-3 initially with 0.1 decay at the 30th epoch. The input images are resized to 256x128 with random cropping, erasing, and horizontal flipping augmentation. The batch size is set to 128 with 32 identities and 4 images for each identity. The hyperparameters γ and τ are set to 1 and 0.1 respectively. (See the training-setup sketch below the table.) |
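
The experiment setup reported above can be summarized as a minimal PyTorch sketch. Only the hyperparameters (optimizer, learning-rate schedule, augmentation types, batch composition, epoch counts) come from the paper's description; the backbone model, dataset, identity-balanced sampler, and crop padding are assumptions introduced here for illustration.

```python
# Hedged sketch of the reported training configuration (not the authors' released code).
import torch
from torch import nn, optim
from torchvision import transforms

# Augmentation as described: resize to 256x128, random cropping, horizontal flipping,
# and random erasing (erasing is applied after ToTensor, as torchvision expects).
train_transform = transforms.Compose([
    transforms.Resize((256, 128)),
    transforms.RandomCrop((256, 128), padding=10),  # padding value is an assumption
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.RandomErasing(),
])

def build_optimizer_and_scheduler(model: nn.Module):
    """SGD, momentum 0.9, initial lr 8e-3, decayed by 0.1 at the 30th epoch."""
    optimizer = optim.SGD(model.parameters(), lr=8e-3, momentum=0.9)
    scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30], gamma=0.1)
    return optimizer, scheduler

# Batch composition: 128 images = 32 identities x 4 images per identity.
# An identity-balanced (PK-style) sampler is assumed but not shown here.
NUM_IDENTITIES_PER_BATCH = 32
IMAGES_PER_IDENTITY = 4
BATCH_SIZE = NUM_IDENTITIES_PER_BATCH * IMAGES_PER_IDENTITY  # 128

EPOCHS_FIRST_DATASET = 80   # first dataset in the training order
EPOCHS_SUBSEQUENT = 60      # each subsequent dataset
```

This sketch is only meant to make the reported configuration concrete; reproducing the paper would additionally require the LSTKC model and losses, the LReID datasets, and the identity-balanced batch sampler, none of which are specified in code form in the paper.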