Adversarial Attribute-Image Person Re-identification

Authors: Zhou Yin, Wei-Shi Zheng, Ancong Wu, Hong-Xing Yu, Hai Wan, Xiaowei Guo, Feiyue Huang, Jianhuang Lai

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'We conducted extensive experiments on three attribute datasets and demonstrated that the regularized adversarial modelling is so far the most effective method for the attribute-image cross-modality person Re-ID problem.'
Researcher Affiliation | Collaboration | School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China; YouTu Lab, Tencent; Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements about open-sourcing code or links to a code repository.
Open Datasets | Yes | 'We evaluate our approach and compare with related methods on three benchmark datasets, including Duke Attribute [Lin et al., 2017], Market Attribute [Lin et al., 2017], and PETA [Deng et al., 2014].'
Dataset Splits | No | The paper specifies training and testing sets but does not explicitly mention a separate validation split. For example: 'The Duke Attribute dataset contains 16522 images for training, and 19889 images for testing.'
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python version, library versions such as TensorFlow or PyTorch).
Experiment Setup | Yes | 'We first pre-trained our image network for 100 epochs using the semantic ID, with an adam optimizer [Kingma and Ba, 2015] with learning rate 0.01, momentum 0.9 and weight decay 5e-4. After that, we jointly train the whole network. We set λG in Eq. (2) as 0.001, and λD as 0.5... The total epoch was set to 300. During training, we set the learning rate of the attribute branch to 0.01, and set the learning rate of the image branch to 0.001... The batch size of training is 128 and the setting of optimizer is the same as that of pre-training.'
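The quoted setup maps onto a standard two-stage optimizer configuration. Below is a minimal PyTorch-style sketch of those reported hyperparameters only; the paper does not release code or name its framework, so the module names ("image_branch", "attribute_branch") and layer sizes are hypothetical placeholders, and the reported "momentum 0.9" is interpreted here as Adam's beta1.

```python
import torch
import torch.nn as nn

# Stand-in modules; the actual image/attribute branches are the networks
# described in the paper and are not publicly released.
image_branch = nn.Linear(2048, 512)      # hypothetical image sub-network
attribute_branch = nn.Linear(105, 512)   # hypothetical attribute sub-network

# Pre-training stage (image branch only): 100 epochs on semantic IDs,
# Adam with lr 0.01 and weight decay 5e-4; "momentum 0.9" is read as beta1.
pretrain_optimizer = torch.optim.Adam(
    image_branch.parameters(), lr=0.01, betas=(0.9, 0.999), weight_decay=5e-4)

# Joint training stage: 300 epochs, batch size 128, with per-branch
# learning rates set via parameter groups (attribute 0.01, image 0.001).
joint_optimizer = torch.optim.Adam(
    [{"params": attribute_branch.parameters(), "lr": 0.01},
     {"params": image_branch.parameters(), "lr": 0.001}],
    betas=(0.9, 0.999), weight_decay=5e-4)

# Loss weights reported for Eq. (2).
lambda_G, lambda_D = 0.001, 0.5
```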