GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition
Authors: Hanqing Chao, Yiwei He, Junping Zhang, Jianfeng Feng
AAAI 2019, pp. 8126-8133 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 95.0% on the CASIA-B gait dataset and an 87.1% accuracy on the OU-MVLP gait dataset. These results represent new state-of-the-art recognition accuracy. |
| Researcher Affiliation | Academia | Hanqing Chao,1 Yiwei He,1 Junping Zhang,1 Jianfeng Feng2 1Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science 2Institute of Science and Technology for Brain-inspired Intelligence Fudan University, Shanghai 200433, China {hqchao16, heyw15, jpzhang, jffeng}@fudan.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code has been released at https://github.com/AbnerHqC/GaitSet. |
| Open Datasets | Yes | The CASIA-B dataset (Yu, Tan, and Tan 2006) is a popular gait dataset. The OU-MVLP dataset (Takemura et al. 2018b) is so far the world's largest public gait dataset. |
| Dataset Splits | Yes | In ST, the first 24 subjects (labeled 001-024) are used for training and the remaining 100 subjects are left for testing. In MT, the first 62 subjects are used for training and the remaining 62 are left for testing. In LT, the first 74 subjects are used for training and the remaining 50 are left for testing. |
| Hardware Specification | Yes | The models are trained with 8 NVIDIA 1080TI GPUs. |
| Software Dependencies | No | The paper mentions 'Adam is chosen as an optimizer' but does not provide specific software library names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The set cardinality in training is set to 30. Adam is chosen as the optimizer (Kingma and Ba 2015). The number of scales S in HPM is set to 5. The margin in the BA+ triplet loss is set to 0.2. ... the mini-batch is composed in the manner introduced in Sec. 3.5 with p = 8 and k = 16. ... The learning rate is set to 1e-4. |
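The batch-composition numbers reported above (set cardinality 30, p = 8 subjects, k = 16 sequences per subject) can be illustrated with a minimal sketch of a p × k mini-batch sampler. Only those three numbers come from the paper; the `dataset` layout and the function name `sample_batch` are hypothetical choices for illustration, not the authors' released code.

```python
import random

def sample_batch(dataset, p=8, k=16, frames=30, rng=None):
    """Compose one mini-batch in a p x k manner: p subjects, k sequences
    per subject, and a random fixed-size subset of `frames` silhouette
    frames from each sequence (the paper's set cardinality of 30).

    `dataset` maps subject id -> list of sequences, each sequence being
    a list of frame identifiers (hypothetical structure).
    """
    rng = rng or random.Random(0)
    subjects = rng.sample(sorted(dataset), p)
    batch = []
    for sid in subjects:
        for seq in rng.sample(dataset[sid], k):
            if len(seq) >= frames:
                # Enough frames: draw a subset without replacement.
                clip = rng.sample(seq, frames)
            else:
                # Short sequence: pad by sampling with replacement.
                clip = [rng.choice(seq) for _ in range(frames)]
            batch.append((sid, clip))
    return batch
```

With these defaults each batch holds p × k = 128 labeled frame sets, which is what the BA+ triplet loss mines its anchor/positive/negative combinations from.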