Recurrent Attention Model for Pedestrian Attribute Recognition

Authors: Xin Zhao, Liufang Sang, Guiguang Ding, Jungong Han, Na Di, Chenggang Yan (pp. 9275-9282)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive empirical evidence shows that our recurrent model frameworks achieve state-of-the-art results, based on pedestrian attribute datasets, i.e. standard PETA and RAP datasets."; "For evaluations, we use the two largest publicly available pedestrian attribute datasets: (1) The PEdesTrian Attribute (PETA)... (2) The Richly Annotated Pedestrian (RAP)..."; "Results. The experiment results of our method and competitors are in Tab. 3."
Researcher Affiliation | Academia | 1 Beijing National Research Center for Information Science and Technology (BNRist), School of Software, Tsinghua University, Beijing 100084, China; 2 School of Computing & Communications, Lancaster University, UK; 3 Institute of Information and Control, Hangzhou Dianzi University, Hangzhou, China
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "For evaluations, we use the two largest publicly available pedestrian attribute datasets: (1) The PEdesTrian Attribute (PETA) (Deng et al. 2014) dataset consists of 19000 person images... (2) The Richly Annotated Pedestrian (RAP) attribute dataset (Li et al. 2016a) has 41585 images..."
Dataset Splits | Yes | "Following the same protocol as (Deng et al. 2015; Li, Chen, and Huang 2015), we divide the whole dataset into three non-overlapping partitions: 9500 for model training, 1900 for verification, and 7600 for model evaluation."
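The reported 9500/1900/7600 partition of PETA's 19000 images can be sketched as a seeded random split. This is a minimal sketch, not the authors' code: the function name, the random seed, and the shuffling strategy are assumptions; only the partition sizes come from the paper.

```python
import random

def split_peta(num_images=19000, n_train=9500, n_val=1900, n_test=7600, seed=0):
    """Partition image indices into non-overlapping train/val/test sets.

    The 9500/1900/7600 sizes are those reported for PETA; the seed and
    the use of a plain shuffle are illustrative assumptions.
    """
    assert n_train + n_val + n_test == num_images
    idx = list(range(num_images))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for repeatability
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

Because the three slices are taken from one shuffled index list, the partitions are non-overlapping by construction, matching the protocol quoted above.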
Hardware Specification | No | The paper does not report the hardware used for the experiments (e.g. GPU/CPU models or memory).
Software Dependencies | No | The paper mentions 'tensorflow' but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | "The optimization algorithm used in training the proposed model is Adam. The initial learning rate of training is 0.1 and reduced to 0.001 by a factor of 0.1 at last."
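The quoted schedule (Adam, learning rate starting at 0.1 and reduced by a factor of 0.1 down to 0.001) can be sketched as a step-decay function. The decay interval is an assumption, since the paper does not state when the rate is reduced; only the 0.1 and 0.001 endpoints and the 0.1 factor come from the quote.

```python
def step_decay_lr(epoch, initial_lr=0.1, factor=0.1, min_lr=0.001, step=10):
    """Step-decay learning-rate schedule.

    Multiplies the rate by `factor` every `step` epochs, flooring at
    `min_lr`. The 10-epoch interval is a guessed placeholder, not a
    value reported in the paper.
    """
    lr = initial_lr * (factor ** (epoch // step))
    return max(lr, min_lr)
```

Under these assumptions the rate would pass through 0.1, 0.01, and settle at the reported final value 0.001; the resulting value would be passed to an Adam optimizer at each epoch.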