Lifelong Person Re-identification by Pseudo Task Knowledge Preservation

Authors: Wenhang Ge, Junlong Du, Ancong Wu, Yuqiao Xian, Ke Yan, Feiyue Huang, Wei-Shi Zheng

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate the superiority of our method as compared with the state-of-the-art lifelong learning and LReID methods.
Researcher Affiliation Collaboration Wenhang Ge 1,2,3, Junlong Du 2, Ancong Wu 1,3,*, Yuqiao Xian 1, Ke Yan 2, Feiyue Huang 2, Wei-Shi Zheng 1,3,4 — 1 School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; 2 Youtu Lab, Tencent; 3 Pazhou Lab, Guangzhou, China; 4 Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China
Pseudocode No The paper describes the proposed framework and its components using mathematical formulas and textual descriptions, but does not include a distinct pseudocode or algorithm block.
Open Source Code Yes Code available at https://github.com/g3956/PTKP
Open Datasets Yes To evaluate the effectiveness of our method, we conducted experiments in the LReID setting as GwFReID (Wu and Gong 2021) on benchmark person Re-ID datasets, of which four were used as sequential input datasets (i.e., Market-1501 (Zheng et al. 2015), DukeMTMC (Zheng, Zheng, and Yang 2017), CUHK-SYSU (Xiao et al. 2017) and MSMT17 (Wei et al. 2018)).
Dataset Splits No The paper defines D(s)_train and D(s)_test for training and testing respectively, and mentions using an exemplar memory bank of old tasks during replay. However, it does not explicitly describe a separate validation split for hyperparameter tuning or early stopping on the current task.
Hardware Specification No The paper does not provide specific details regarding the hardware used for running the experiments (e.g., GPU models, CPU types, or cloud instance specifications).
Software Dependencies No The paper mentions using 'ResNet-50' and 'Adam' as the optimization algorithm, but does not specify programming languages, libraries, or their version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup Yes The mini-batch size for both the new task and the replay task sampled from the exemplar memory bank was 128 per task. ... The learning rate was set to 0.00035 initially and decayed by 0.1 after the 40th and 70th epochs, over 80 epochs in total, for training the first dataset. For subsequent tasks, the learning rate was set to 0.000035 initially and decayed by 0.1 at the 30th epoch, over 60 epochs in total. The weights for all loss functions were set to 1 for simplicity. τ was set to 0.5.
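The reported step-decay schedule can be sketched in plain Python. This is a minimal reconstruction from the numbers quoted above, not the authors' code: the helper name `lr_at_epoch` is hypothetical, and the assumption that decay takes effect from the milestone epoch onward (as in a standard multi-step scheduler) is ours.

```python
def lr_at_epoch(epoch, base_lr, milestones, gamma=0.1):
    """Step-decay schedule: multiply base_lr by gamma once per passed milestone.

    epoch      -- current epoch index (0-based)
    base_lr    -- initial learning rate
    milestones -- epochs at which the rate is decayed
    gamma      -- decay factor (0.1 in the paper's setup)
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# First dataset: 80 epochs, initial LR 0.00035, decayed at epochs 40 and 70.
first_task_lrs = [lr_at_epoch(e, 3.5e-4, (40, 70)) for e in range(80)]

# Subsequent tasks: 60 epochs, initial LR 0.000035, decayed at epoch 30.
later_task_lrs = [lr_at_epoch(e, 3.5e-5, (30,)) for e in range(60)]
```

In a PyTorch setup this would correspond to `Adam` combined with a multi-step LR scheduler using the same milestones and gamma.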