Visual-Semantic Graph Reasoning for Pedestrian Attribute Recognition

Authors: Qiaozhe Li, Xin Zhao, Ran He, Kaiqi Huang (pp. 8634-8641)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We verify the proposed framework on three large-scale pedestrian attribute datasets, including PETA, RAP, and PA-100K. Experiments show the superiority of the proposed method over state-of-the-art methods and the effectiveness of our joint GCN structures for sequential attribute prediction.
Researcher Affiliation | Academia | CRISE, CASIA; CRIPAC & NLPR, CASIA; University of Chinese Academy of Sciences; CAS Center for Excellence in Brain Science and Intelligence Technology
Pseudocode | No | The paper describes the model architecture and operations using mathematical equations and textual descriptions, but it does not contain a formal pseudocode block or algorithm.
Open Source Code | No | The paper does not include an explicit statement about releasing its source code or provide a link to a code repository.
Open Datasets | Yes | The proposed method is evaluated on three publicly available pedestrian attribute datasets: (1) the PEdesTrian Attribute (PETA) dataset (Deng et al. 2014); (2) the Richly Annotated Pedestrian (RAP) attribute dataset (Li et al. 2016a); (3) the PA-100K dataset (Liu et al. 2017b).
Dataset Splits | Yes | The PETA dataset (Deng et al. 2014) consists of 19,000 person images collected from 10 small-scale person datasets; the whole dataset is randomly divided into three non-overlapping partitions: 9,500 for training, 1,900 for validation, and 7,600 for evaluation. The PA-100K dataset consists of 100,000 pedestrian images from 598 outdoor scenes and is split into training, validation, and test sets with a ratio of 8:1:1 (Liu et al. 2017b).
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper names PyTorch as the implementation framework but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Stochastic gradient descent (Sutskever et al. 2013) is employed for training, with momentum of 0.9 and weight decay of 0.0005. The batch size is set to 32. The initial learning rate is set to 10^-3 for the first 20 epochs, and decreased to 10^-4 for the second 20 epochs.
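The split sizes quoted in the Dataset Splits row can be sanity-checked in a few lines. This is a sketch with names of our own choosing (`PETA_SPLIT`, `ratio_split`), not code from the paper: it verifies that PETA's fixed 9,500/1,900/7,600 partition sums to 19,000 images and derives PA-100K's 8:1:1 train/val/test sizes from its 100,000-image total.

```python
# Sanity-check the quoted dataset partitions; all names here are ours.
PETA_TOTAL = 19_000
PETA_SPLIT = {"train": 9_500, "val": 1_900, "test": 7_600}
assert sum(PETA_SPLIT.values()) == PETA_TOTAL  # fixed PETA partition adds up

def ratio_split(n: int, ratios=(8, 1, 1)):
    """Split n items by integer ratios; the last part absorbs any remainder."""
    total = sum(ratios)
    sizes = [n * r // total for r in ratios[:-1]]
    sizes.append(n - sum(sizes))
    return sizes

# PA-100K: 100,000 images split 8:1:1 into train/val/test.
print(ratio_split(100_000))  # -> [80000, 10000, 10000]
```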
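The Experiment Setup row can be made concrete as a short training-loop sketch. This is framework-agnostic pseudo-training on a toy scalar parameter, not the authors' implementation (the paper names PyTorch but gives no versions or code): it encodes the reported SGD-with-momentum update with weight decay 0.0005, batch size 32, and the step-decay schedule of 1e-3 for the first 20 epochs and 1e-4 for the next 20.

```python
# Hedged sketch of the reported optimization setup; the model is a toy scalar.
MOMENTUM = 0.9
WEIGHT_DECAY = 0.0005
BATCH_SIZE = 32  # reported batch size; unused in this scalar toy

def learning_rate(epoch: int) -> float:
    """Step decay: 1e-3 for epochs 0-19, then 1e-4 for epochs 20-39."""
    return 1e-3 if epoch < 20 else 1e-4

def sgd_step(w: float, grad: float, velocity: float, lr: float):
    """One SGD-with-momentum update, with L2 weight decay folded into the gradient."""
    grad = grad + WEIGHT_DECAY * w
    velocity = MOMENTUM * velocity + grad
    return w - lr * velocity, velocity

w, v = 1.0, 0.0
for epoch in range(40):
    lr = learning_rate(epoch)
    # one (fake) mini-batch gradient per epoch, for illustration only
    w, v = sgd_step(w, grad=0.1, velocity=v, lr=lr)
```

In a real PyTorch implementation the same schedule would typically be expressed with `torch.optim.SGD` plus a step learning-rate scheduler, but the paper does not specify those details.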