Grouping Attribute Recognition for Pedestrian with Joint Recurrent Learning
Authors: Xin Zhao, Liufang Sang, Guiguang Ding, Yuchen Guo, Xiaoming Jin
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical evidence shows that our GRL model achieves state-of-the-art results, based on pedestrian attribute datasets, i.e. standard PETA and RAP datasets. |
| Researcher Affiliation | Academia | 1Beijing National Research Center for Information Science and Technology (BNRist), School of Software, Tsinghua University, Beijing 100084, China {zhaoxin19,yuchen.w.guo}@gmail.com, slf12thuss@163.com, {dinggg,xmjin}@tsinghua.edu.cn |
| Pseudocode | No | The paper describes the network architecture and mathematical formulations (e.g., LSTM equations) but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about making source code publicly available or a link to a code repository for the described methodology. |
| Open Datasets | Yes | For evaluations, we used the two largest publicly available pedestrian attribute datasets: (1) The PEdesTrian Attribute (PETA) [Deng et al., 2014] dataset consists of 19000 person images collected from 10 small-scale person datasets. (2) The Richly Annotated Pedestrian (RAP) attribute dataset [Li et al., 2016a] has 41585 images drawn from 26 indoor surveillance cameras. |
| Dataset Splits | Yes | PETA: "Following the same protocol as [Deng et al., 2015; Li et al., 2015], we divide the whole dataset into three nonoverlapping partitions: 9500 for model training, 1900 for verification, and 7600 for model evaluation." RAP: "We adopt the same data split as in [Li et al., 2016a]: 33268 images for training and the remaining 8317 for test." |
| Hardware Specification | No | The paper mentions that the model is 'trained with tensorflow' but does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper states 'Our model is trained with tensorflow' but does not provide specific version numbers for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | The optimization algorithm used in training the proposed model is SGD. The initial learning rate of training is 0.1 and reduced to 0.001 by a factor of 0.1 at last. |
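The reported setup (SGD, initial learning rate 0.1, decayed by a factor of 0.1 down to a final rate of 0.001) can be sketched as a step-decay schedule. Note the paper does not state at which epochs the decay occurs, so the `milestones` below are purely illustrative assumptions:

```python
def learning_rate(epoch, milestones=(30, 60), base_lr=0.1, factor=0.1):
    """Step-decay schedule: start at base_lr and multiply by `factor`
    at each milestone epoch, ending at 0.001 as reported in the paper.
    The milestone epochs themselves are assumptions, not from the paper."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

# Illustrative progression: 0.1 before epoch 30, 0.01 until epoch 60,
# 0.001 thereafter (matching the paper's initial and final rates).
```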