Relation-Aware Pedestrian Attribute Recognition with Graph Convolutional Networks
Authors: Zichang Tan, Yang Yang, Jun Wan, Guodong Guo, Stan Z. Li (pp. 12055–12062)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on three benchmarks, including PA-100K, RAP, PETA attribute datasets, demonstrate the effectiveness of the proposed JLAC. |
| Researcher Affiliation | Collaboration | (1) CBSR&NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Institute of Deep Learning, Baidu Research, Beijing, China; (4) National Engineering Laboratory for Deep Learning Technology and Application, Beijing, China; (5) Faculty of Information Technology, Macau University of Science and Technology, Macau, China |
| Pseudocode | No | The paper describes the approach using mathematical formulas and textual descriptions, but it does not include structured pseudocode or algorithm blocks labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or provide links to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments for pedestrian attribute recognition on three benchmark datasets: PA-100K (Liu et al. 2017), RAP (Li et al. 2018b) and PETA (Deng et al. 2014) datasets. |
| Dataset Splits | Yes | PA-100K dataset...divided into three subsets with 80,000, 10,000 and 10,000 images for training, validation and testing, respectively. [...] PETA dataset...randomly split into 3 parts, where 9,500 images are used for training, 1,900 images for validation and the remaining 7,600 images for testing. |
| Hardware Specification | No | The paper mentions that 'All networks are optimized by Adam optimizer' and 'All networks are first pretrained on the ImageNet', implying computational resources were used, but it does not specify exact hardware components such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions 'Adam optimizer (Kingma and Ba 2015)' and 'ResNet-50 (He et al. 2016)' but does not provide specific version numbers for any software, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | In our experiments, we adopt the image with the size of 256 × 128 as input. Before feeding the images to the network, all images are normalized by subtracting a mean and dividing by a standard deviation for each color channel. In the training stage, data augmentation is also employed to improve the performance. [...] All networks are first pretrained on ImageNet (Deng et al. 2009), and then finetuned on pedestrian attribute datasets. All networks are optimized by the Adam optimizer (Kingma and Ba 2015) with β1 = 0.9, β2 = 0.999 and ϵ = 10⁻⁸. The learning rate starts at 0.0001 and is reduced by a factor of 10 as the number of iterations increases. [...] The model performs best when d = 32, and this value is used in other experiments. [...] The highest performance is achieved when v is set to 15, and this value is used in other experiments. [...] The model achieves the highest performance when λ1 = 1 and λ2 = 0.5, which are adopted in other experiments. |
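The reported setup (per-channel normalization of 256 × 128 inputs, Adam with β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁸, a learning rate starting at 0.0001 and divided by 10 as iterations increase) can be sketched as follows. This is a minimal illustration, not the authors' code: the mean/std values are the common ImageNet statistics (an assumption, since the paper does not list them), and the decay milestones in `learning_rate` are hypothetical because the paper does not state when the drops occur.

```python
import numpy as np

# Assumed ImageNet channel statistics (RGB); the paper only says images are
# normalized per color channel, without listing the values.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(image):
    """Normalize an H x W x 3 float image in [0, 1] per color channel,
    as described in the experiment setup (subtract mean, divide by std)."""
    return (image - IMAGENET_MEAN) / IMAGENET_STD

def learning_rate(step, base_lr=1e-4, decay_steps=(10000, 20000)):
    """Start at 1e-4 and divide by 10 at each milestone.
    The milestone iterations are hypothetical; the paper only says the rate
    is reduced by a factor of 10 as iterations increase."""
    drops = sum(step >= s for s in decay_steps)
    return base_lr / (10 ** drops)

# Example: a 256 x 128 input image, as used in the paper.
img = np.full((256, 128, 3), 0.5)
out = normalize(img)
print(out.shape)            # (256, 128, 3)
print(learning_rate(0))     # 0.0001
print(learning_rate(15000)) # 1e-05
```

The Adam hyperparameters quoted above (β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁸) are the Kingma and Ba defaults, so in most frameworks they require no explicit configuration beyond the initial learning rate.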