Lifelong Person Re-identification via Knowledge Refreshing and Consolidation

Authors: Chunlin Yu, Ye Shi, Zimo Liu, Shenghua Gao, Jingya Wang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations show KRKC's superiority over the state-of-the-art LReID methods on challenging pedestrian benchmarks. Experiment: Datasets and Evaluation Protocol. To evaluate our strategy's performance with LReID tasks, we assess our model on a challenging benchmark with four sequential datasets: VIPeR (Gray and Tao 2008), Market (Zheng et al. 2015), CUHK-SYSU (Xiao et al. 2016), and MSMT17 (Wei et al. 2018).
Researcher Affiliation | Academia | Chunlin Yu1, Ye Shi1,3, Zimo Liu2, Shenghua Gao1,3, Jingya Wang1,3* (1ShanghaiTech University; 2Peng Cheng Laboratory; 3Shanghai Engineering Research Center of Intelligent Vision and Imaging). {yuchl, shiye, gaoshh, wangjingya}@shanghaitech.edu.cn; liuzm@pcl.ac.cn
Pseudocode | Yes | Algorithm 1 (KRKC), reproduced below.

Algorithm 1: KRKC
Input: incoming dataset D_t^train, buffer M_t, learning rates γ, η
Parameters: working-model parameters Θ_t^w; memory-model parameters Θ_t^m
for epoch in 1 .. e_max do
    for {x_i^t, y_i^t}_{i=1..N_b} in D_t^train do
        Sample a mini-batch {x_i^r, y_i^r}_{i=1..N_b} from M_t
        Knowledge rehearsal for anti-forgetting by Eq. (1)
        Knowledge rehearsal for adaptation by Eq. (2), Eq. (3)
        Calculate overall loss L_w by Eq. (4)
        Update Θ_t^w by gradient descent: ϑ ← ϑ − γ∇_ϑ L_w for each ϑ in Θ_t^w
        Knowledge refreshing for calibration by Eq. (5)
        Knowledge refreshing for memorization by Eq. (6), Eq. (7)
        Calculate overall loss L_m by Eq. (8)
        Update Θ_t^m by gradient descent: ϑ ← ϑ − η∇_ϑ L_m for each ϑ in Θ_t^m
    end for
end for
Model space consolidation by Eq. (9)
Feature space consolidation by Eq. (10)
Update exemplar memory: M_t → M_{t+1}
return updated model parameters Θ_{t+1}^w, Θ_{t+1}^m
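A minimal PyTorch-style sketch of this alternating working-model/memory-model update is given below for orientation. It is not the authors' implementation: the concrete losses (cross-entropy plus logit distillation at temperature T = 2) are stand-ins for Eqs. (1) to (8), and the function and variable names are our own placeholders.

```python
# Minimal sketch of the KRKC alternating update (Algorithm 1).
# The losses here approximate, but do not reproduce, Eqs. (1)-(8) of the paper.
import torch
import torch.nn.functional as F

T = 2.0  # distillation temperature reported in the paper


def distill(student_logits, teacher_logits, temp=T):
    """KL-based logit distillation, used as a stand-in for the rehearsal/refreshing terms."""
    return F.kl_div(F.log_softmax(student_logits / temp, dim=1),
                    F.softmax(teacher_logits / temp, dim=1),
                    reduction="batchmean") * (temp * temp)


def train_one_task(working_model, memory_model, current_loader, buffer_loader,
                   gamma=3.5e-4, eta=3.5e-4, epochs=60):
    opt_w = torch.optim.Adam(working_model.parameters(), lr=gamma)
    opt_m = torch.optim.Adam(memory_model.parameters(), lr=eta)

    for _ in range(epochs):
        for (x_cur, y_cur), (x_mem, y_mem) in zip(current_loader, buffer_loader):
            # Working model: knowledge rehearsal for anti-forgetting and adaptation
            # (Eqs. (1)-(4); approximated by cross-entropy + distillation from the memory model).
            logits_w_cur = working_model(x_cur)
            logits_w_mem = working_model(x_mem)
            with torch.no_grad():
                logits_m_mem = memory_model(x_mem)
            loss_w = (F.cross_entropy(logits_w_cur, y_cur)
                      + F.cross_entropy(logits_w_mem, y_mem)
                      + distill(logits_w_mem, logits_m_mem))
            opt_w.zero_grad()
            loss_w.backward()
            opt_w.step()

            # Memory model: knowledge refreshing for calibration and memorization
            # (Eqs. (5)-(8); approximated by cross-entropy + distillation from the working model).
            logits_m_cur = memory_model(x_cur)
            logits_m_mem = memory_model(x_mem)
            with torch.no_grad():
                logits_w_cur_t = working_model(x_cur)
            loss_m = (F.cross_entropy(logits_m_mem, y_mem)
                      + distill(logits_m_cur, logits_w_cur_t))
            opt_m.zero_grad()
            loss_m.backward()
            opt_m.step()

    # Model/feature space consolidation (Eqs. (9)-(10)) and the exemplar-buffer
    # update M_t -> M_{t+1} would follow here.
    return working_model, memory_model
```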
Open Source Code | Yes | Code is available at https://github.com/cly234/LReID-KRKC.
Open Datasets | Yes | To evaluate our strategy's performance with LReID tasks, we assess our model on a challenging benchmark with four sequential datasets: VIPeR (Gray and Tao 2008), Market (Zheng et al. 2015), CUHK-SYSU (Xiao et al. 2016), and MSMT17 (Wei et al. 2018).
Dataset Splits | No | The paper states "At each step t, D_t = {D_t^train, D_t^test} contains the training set and testing set" and describes evaluation on test sets. However, it does not specify a distinct validation split or its size/percentage; only training and testing sets are mentioned.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions using a "ResNet50 model" and "Adam for optimization" but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | We use a ResNet50 model pre-trained on ImageNet as our backbone. Global average pooling is replaced with generalized mean pooling. All person images are resized to 256×128. The batch size for the current task and the exemplar task is set to 128... We use Adam for optimization and train each task for 60 epochs. The learning rate is initialized at 3.5×10^-4, which is then decreased by 0.1 after the 40th epoch for the first task... The weights for all losses are set to 1 and the temperature T for distillation is set to 2.
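As a reproduction aid, the reported hyper-parameters can be collected into a small setup sketch. This is a hedged illustration under our own assumptions: the torchvision backbone call, the normalization statistics, and the MultiStepLR scheduler are our choices, not stated in the paper, and GeM pooling (which the paper substitutes for global average pooling) is not implemented here.

```python
# Hedged sketch of the training setup reported in the paper; library choices
# (torchvision) and normalization statistics are assumptions on our part.
import torch
import torchvision.transforms as transforms
from torchvision.models import resnet50

CONFIG = {
    "image_size": (256, 128),      # person images resized to 256x128 (height, width)
    "batch_size": 128,             # for both the current task and the exemplar task
    "epochs_per_task": 60,
    "lr": 3.5e-4,                  # Adam, decayed by 0.1 after epoch 40 (first task)
    "lr_decay_epoch": 40,
    "lr_decay_factor": 0.1,
    "loss_weights": 1.0,           # all loss weights set to 1
    "distill_temperature": 2.0,
}

# ImageNet-pretrained ResNet50 backbone; the paper replaces global average
# pooling with generalized mean (GeM) pooling, which is omitted in this sketch.
backbone = resnet50(weights="IMAGENET1K_V1")

train_transform = transforms.Compose([
    transforms.Resize(CONFIG["image_size"]),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(backbone.parameters(), lr=CONFIG["lr"])
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[CONFIG["lr_decay_epoch"]],
    gamma=CONFIG["lr_decay_factor"],
)
```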