Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification

Authors: Guile Wu, Shaogang Gong

AAAI 2021, pp. 2898-2906

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on ten Re-ID benchmarks show that FedReID achieves compelling generalisation performance beyond any locally trained models without using shared training data, whilst inherently protecting the privacy of each local client.
Researcher Affiliation | Academia | Guile Wu, Shaogang Gong, Queen Mary University of London; guile.wu@qmul.ac.uk, s.gong@qmul.ac.uk
Pseudocode | No | The paper includes equations and describes its training steps, but does not present a clearly labeled pseudocode or algorithm block. (A hedged sketch of one training round is given after this table.)
Open Source Code | No | The paper does not provide any explicit statements or links indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | We used four large-scale Re-ID datasets (DukeMTMC-ReID (Zheng, Zheng, and Yang 2017), Market-1501 (Zheng et al. 2015), CUHK03 (Li et al. 2014; Zhong et al. 2017) and MSMT17 (Wei et al. 2018)) as non-shared local datasets in four client sites... The FedReID model was then evaluated on five smaller Re-ID datasets (VIPeR (Gray and Tao 2008), iLIDS (Zheng, Gong, and Xiang 2009), 3DPeS (Baltieri, Vezzani, and Cucchiara 2011), CAVIAR (Cheng et al. 2011) and GRID (Loy, Liu, and Gong 2013)), plus a large-scale Re-ID dataset (CUHK-SYSU person search (Xiao et al. 2017)) as new unseen target domains for out-of-the-box deployment tests. ... Besides, we used CIFAR-10 (Krizhevsky and Hinton 2009) for federated formulation generalisation analysis on image classification. (An illustrative multi-client partition of CIFAR-10 is sketched after this table.)
Dataset Splits | No | The paper mentions "10 training/testing splits" and discusses evaluation metrics and hyperparameters, but does not explicitly specify a validation split or a methodology for tuning hyperparameters on held-out data. For example, "We empirically set batch size to 32..." indicates empirical setting rather than validation-based selection.
Hardware Specification | Yes | Our models were implemented with Python (3.6) and PyTorch (0.4), and trained on a Tesla V100 GPU (32GB).
Software Dependencies | Yes | Our models were implemented with Python (3.6) and PyTorch (0.4), and trained on a Tesla V100 GPU (32GB). (A snippet for logging the equivalent environment details is given after this table.)
Experiment Setup | Yes | We empirically set batch size to 32, maximum global communication epochs k_max = 100, maximum local steps t_max = 1, and temperature T = 3. We used SGD as the optimiser with Nesterov momentum 0.9 and weight decay 5e-4. The learning rates were set to 0.01 for embedding networks and 0.1 for mapping networks, decayed by 0.1 every 40 epochs. (These hyperparameters map onto the optimiser/scheduler sketch after this table.)
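
For readers who want the algorithm block the Pseudocode row finds missing, here is a minimal sketch of one decentralised training round. It assumes a FedAvg-style average of client weights; the paper's actual selective aggregation and knowledge-distillation rules (temperature T = 3) may differ, and the helper names (local_update, aggregate) are hypothetical.

```python
# Hedged reconstruction of one decentralised training round. Assumption:
# the central model is updated by averaging client weights (FedAvg-style);
# the paper's actual aggregation/distillation rules may differ.
import copy
import torch
import torch.nn as nn

def local_update(server_model, loader, steps=1, lr=0.01):
    """Run t_max local SGD steps on one client's private data."""
    model = copy.deepcopy(server_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                          nesterov=True, weight_decay=5e-4)
    criterion = nn.CrossEntropyLoss()
    data_iter = iter(loader)
    for _ in range(steps):
        x, y = next(data_iter)
        opt.zero_grad()
        criterion(model(x), y).backward()
        opt.step()
    return model.state_dict()

def aggregate(client_states):
    """Average the clients' weights into a new central model state."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack(
            [s[key].float() for s in client_states]).mean(dim=0)
    return avg

# One global communication epoch (repeated k_max = 100 times in the paper):
# states = [local_update(server_model, loader) for loader in client_loaders]
# server_model.load_state_dict(aggregate(states))
```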
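
As a concrete illustration of the multi-client setup noted in the Open Datasets row, the following sketch partitions CIFAR-10 into four disjoint simulated clients with torchvision. The equal random split and loader settings are assumptions for demonstration, not the paper's partitioning protocol.

```python
# Illustrative only: simulate four non-shared "client" datasets from CIFAR-10.
import torch
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

cifar = datasets.CIFAR10(root="./data", train=True, download=True,
                         transform=transforms.ToTensor())

num_clients = 4
lengths = [len(cifar) // num_clients] * num_clients  # 4 x 12500 images
client_sets = random_split(cifar, lengths,
                           generator=torch.Generator().manual_seed(0))
client_loaders = [DataLoader(s, batch_size=32, shuffle=True)
                  for s in client_sets]
```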
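
When attempting a re-run, the environment details quoted in the Hardware/Software rows can be recorded with a few standard calls (ordinary Python/PyTorch APIs, not code from the paper):

```python
# Log the software/hardware environment for reproducibility records.
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print("GPU memory (GB):", round(props.total_memory / 1024 ** 3, 1))
```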
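
Finally, the hyperparameters quoted in the Experiment Setup row map directly onto standard PyTorch optimiser and scheduler objects. The sketch below uses placeholder nn.Linear modules in place of the paper's embedding and mapping networks.

```python
# Quoted setup: SGD with Nesterov momentum 0.9 and weight decay 5e-4;
# lr 0.01 (embedding network) and 0.1 (mapping network), both decayed
# by 0.1 every 40 epochs. The two modules below are placeholders.
import torch
import torch.nn as nn

embedding_net = nn.Linear(512, 256)  # placeholder embedding network
mapping_net = nn.Linear(256, 751)    # placeholder mapping (classifier) network

optimizer = torch.optim.SGD(
    [{"params": embedding_net.parameters(), "lr": 0.01},
     {"params": mapping_net.parameters(), "lr": 0.1}],
    lr=0.01, momentum=0.9, nesterov=True, weight_decay=5e-4,
)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)

# for epoch in range(100):  # k_max = 100 global communication epochs
#     ...  # local updates and aggregation
#     scheduler.step()
```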