Generalising without Forgetting for Lifelong Person Re-Identification
Authors: Guile Wu, Shaogang Gong (pp. 2889-2897)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on eight Re-ID benchmarks, CIFAR-100 and ImageNet show the superiority of GwFReID over the state-of-the-art methods. |
| Researcher Affiliation | Academia | Guile Wu, Shaogang Gong Queen Mary University of London guile.wu@qmul.ac.uk, s.gong@qmul.ac.uk |
| Pseudocode | Yes | Algorithm 1 GwFReID for Lifelong Person Re-ID. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We conducted extensive experiments on eight person Re-ID benchmarks and two image classification datasets. (1) Although our method is designed for lifelong person Re-ID, it would be interesting to evaluate our method for non-Re-ID tasks. Thus, we employed CIFAR-100 (Krizhevsky and Hinton 2009) and ImageNet (Russakovsky et al. 2015) to evaluate the incremental learning performance for image classification. (2) We used four large-scale Re-ID benchmarks (Market-1501 (Zheng et al. 2015), DukeMTMC-ReID (Zheng, Zheng, and Yang 2017), CUHK-SYSU person search (Xiao et al. 2017) and MSMT17 (Wei et al. 2018)) as sequential input datasets to mimic the lifelong learning process (4 phases). (3) We further tested the model (after training with all 4 phases) on four new Re-ID datasets (CUHK03 (Li et al. 2014), iLIDS (Zheng, Gong, and Xiang 2009), VIPeR (Gray and Tao 2008) and 3DPeS (Baltieri, Vezzani, and Cucchiara 2011)) to evaluate its lifelong generalised Re-ID performance. |
| Dataset Splits | Yes | CIFAR-100 consists of 60000 images in 100 classes, with 500 training images and 100 testing images per class. ImageNet with 1000 classes from ILSVRC 2012 (Russakovsky et al. 2015) contains 1.2 million training images and 50000 validation images. On CUHK03, we used the traditional training/testing splits for 20 trials, while on the other benchmarks, we employed the random half training/testing splits for 10 trials. The Re-ID evaluation statistics are summarised in Table 1. Following (Hou et al. 2019), an identical random seed (1993) by NumPy was used for class splitting. |
| Hardware Specification | Yes | We implemented the proposed method using Python 3.6 and PyTorch 0.4, and trained it on NVIDIA TESLA GPUs. |
| Software Dependencies | Yes | We implemented the proposed method using Python 3.6 and PyTorch 0.4, and trained it on NVIDIA TESLA GPUs. |
| Experiment Setup | Yes | In each lifelong learning phase, we trained the model with 60 epochs for continual learning and 30 epochs for rebalanced learning (i.e. set θ = 60 in Eq. (5) and emax = 90). We used SGD as the optimiser with momentum 0.9 and weight decay 5e-4. We set the initial learning rates to 0.01 for the feature extractor and 0.1 for the classification layers, which decayed by 0.1 after {40, 75} epochs. We set batch size to 32, K=2 to construct the exemplar memory, λ=10 and α=0.5 in Eq. (8) to balance representation learning, γ = 2 in Eq. (4), T = 2 in Eqs. (6) and (7) to generate soft distribution. On image classification, we used ResNet-32 and ResNet-18 for CIFAR-100 and ImageNet, respectively. We set batch size to 128, K=20, β = 0.1T², and applied φ(x)λ. For CIFAR-100, we trained the model with θ=160 epochs for continual learning and set emax=200. We set the initial learning rate to 0.1, which decayed by 0.1 after {80, 120, 180} epochs. For ImageNet, we trained the model with θ=90 epochs for continual learning and set emax=112. We set the initial learning rate to 0.1, which decayed by 0.1 after {30, 60, 100, 110} epochs. |
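The learning-rate schedules quoted above (decay by 0.1 after fixed milestone epochs) can be reproduced with a standard multi-step decay rule. Below is a minimal, hedged sketch of that schedule; the function name `lr_at_epoch` is illustrative and not from the paper, and the paper itself would realise this via PyTorch's optimiser machinery rather than a hand-rolled function.

```python
def lr_at_epoch(initial_lr, milestones, gamma, epoch):
    """Multi-step decay: multiply the rate by `gamma` once per milestone passed.

    Mirrors the quoted setup, e.g. decay by 0.1 after epochs {40, 75} for Re-ID.
    """
    lr = initial_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Re-ID phase (90 epochs total): feature extractor starts at 0.01,
# classification layers at 0.1, both decayed by 0.1 after epochs 40 and 75.
feat_lrs = [lr_at_epoch(0.01, (40, 75), 0.1, e) for e in range(90)]
cls_lrs = [lr_at_epoch(0.1, (40, 75), 0.1, e) for e in range(90)]
```

In PyTorch 0.4 (the version the paper reports), the same schedule would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40, 75], gamma=0.1)` attached to the SGD optimiser.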