Viewpoint-Aware Loss with Angular Regularization for Person Re-Identification

Authors: Zhihui Zhu, Xinyang Jiang, Feng Zheng, Xiaowei Guo, Feiyue Huang, Xing Sun, Weishi Zheng (pp. 13114–13121)

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on the Market1501 and DukeMTMC-reID datasets demonstrated that our method outperforms the state-of-the-art supervised Re-ID methods.
Researcher Affiliation | Collaboration | 1. Tencent Youtu Lab, Shanghai, China; 2. Sun Yat-sen University, Guangzhou, China; 3. Southern University of Science and Technology, Shenzhen, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Available at https://github.com/zzhsysu/VA-ReID (cited in Section 4.1 Datasets and Section 4.5 Further Evaluations)
Open Datasets | Yes | We annotate the viewpoint label of two widely used benchmarks including Market-1501 and DukeMTMC-reID.
Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly provide details for a validation split. It only mentions a 'training set' and 'testing data'.
Hardware Specification | No | The paper does not provide specific hardware details such as the GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions using the SeResNeXt model and the Adam optimizer, but does not provide version numbers for any software dependencies.
Experiment Setup | Yes | We resize images to 384×128 as in many re-ID systems. In the training stage, we set the batch size to 64 by sampling 16 identities and 4 images per identity. The SeResNeXt model with parameters pretrained on ImageNet is used as the backbone network. Common data augmentation strategies, including horizontal flipping, random cropping, padding, and random erasing (with a probability of 0.5), are used. We adopt the Adam optimizer to train our model and set the weight decay to 5×10⁻⁴. The total number of epochs is 200 and the epoch milestones are 50, 100, and 160. The learning rate is initialized to 3.5×10⁻⁴ and is decayed by a factor of 0.1 when the epoch reaches a milestone. At the beginning, we warm up the model for 10 epochs, with the learning rate growing linearly from 3.5×10⁻⁵ to 3.5×10⁻⁴. The parameters in the loss function are set as β = 1, α = 0.2.
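
For readers attempting a reproduction, the quoted setup maps onto a short PyTorch/torchvision sketch, shown below. This is a minimal illustration under stated assumptions, not the authors' implementation: the model is a placeholder (the actual SeResNeXt backbone and the paper's viewpoint-aware loss are not reproduced here), and the 10-pixel padding before random cropping is a common re-ID convention rather than a value stated in the paper. The learning-rate schedule (linear warmup over 10 epochs from 3.5×10⁻⁵ to 3.5×10⁻⁴, then ×0.1 decay at epochs 50, 100, and 160), weight decay, and augmentation probability follow the quoted text.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR
from torchvision import transforms

# Augmentation as described: resize to 384x128, horizontal flip, padding +
# random crop, random erasing with p = 0.5. The 10-pixel pad is an assumption.
train_transform = transforms.Compose([
    transforms.Resize((384, 128)),
    transforms.RandomHorizontalFlip(),
    transforms.Pad(10),
    transforms.RandomCrop((384, 128)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5),
])

# Placeholder model; the paper uses an ImageNet-pretrained SeResNeXt backbone.
model = torch.nn.Linear(2048, 751)

BASE_LR = 3.5e-4
WARMUP_EPOCHS = 10
MILESTONES = (50, 100, 160)

optimizer = Adam(model.parameters(), lr=BASE_LR, weight_decay=5e-4)

def lr_factor(epoch: int) -> float:
    """Warm up linearly from 0.1x to 1.0x of the base LR over the first 10
    epochs (i.e. 3.5e-5 -> 3.5e-4), then decay by 0.1 at each milestone."""
    if epoch < WARMUP_EPOCHS:
        return 0.1 + 0.9 * epoch / WARMUP_EPOCHS
    return 0.1 ** sum(epoch >= m for m in MILESTONES)

scheduler = LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(200):
    # ... one pass over batches of 64 images (16 identities x 4 images each) ...
    scheduler.step()
```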