Cross-Covariate Gait Recognition: A Benchmark

Authors: Shinan Zou, Chao Fan, Jianbo Xiong, Chuanfu Shen, Shiqi Yu, Jin Tang

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To bridge this gap, we undertake an arduous 20-month effort to collect a cross-covariate gait recognition (CCGR) dataset. The CCGR dataset has 970 subjects and about 1.6 million sequences; almost every subject has 33 views and 53 different covariates. ... We have conducted extensive experiments. Our main results show: 1) Cross-covariate emerges as a pivotal challenge for practical applications of gait recognition. 2) ParsingGait demonstrates remarkable potential for further advancement. 3) Alarmingly, existing SOTA methods achieve less than 43% accuracy on the CCGR, highlighting the urgency of exploring cross-covariate gait recognition.
Researcher Affiliation | Academia | (1) School of Automation, Central South University; (2) Department of Computer Science and Engineering, Southern University of Science and Technology; (3) Research Institute of Trustworthy Autonomous System, Southern University of Science and Technology; (4) The University of Hong Kong
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Link: https://github.com/ShinanZou/CCGR
Open Datasets | Yes | To bridge this gap, we undertake an arduous 20-month effort to collect a cross-covariate gait recognition (CCGR) dataset. The CCGR dataset has 970 subjects and about 1.6 million sequences; almost every subject has 33 views and 53 different covariates. ... CCGR will be made publicly available for research purposes.
Dataset Splits | Yes | Subjects are labeled from 1 to 1000. Subjects 134 to 164 are missing. Subjects 1 to 600 are used for training, and the rest are used for testing. (A split sketch follows the table.)
Hardware Specification | No | The paper does not describe the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions models such as QANet and HRNet and optimizers such as Adam and SGD, but it does not specify version numbers for any software dependencies.
Experiment Setup | Yes | The batch size is (8, 16, 30), where 8 is the number of subjects, 16 is the number of training samples per subject, and 30 is the number of frames. The optimizer is Adam, trained for 320K iterations; the learning rate starts at 1e-4 and drops to 1e-5 after 200K iterations. For GaitBase and DeepGaitV2, the optimizer is SGD, trained for 240K iterations; the learning rate starts at 1e-1 and drops by 1/10 at 100K, 140K, and 170K iterations. (A configuration sketch follows the table.)
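
The subject-level split reported above can be expressed as a minimal sketch. The variable names and the flat-list form are illustrative assumptions, not the authors' released data-loading code:

```python
# Minimal sketch of the CCGR subject-level split described in the paper.
# Assumes sequences are indexed by integer subject labels; names here are
# hypothetical and not taken from the authors' repository.

missing = set(range(134, 165))                         # labels 134-164 are absent
subjects = [s for s in range(1, 1001) if s not in missing]

train_ids = [s for s in subjects if s <= 600]          # subjects 1-600: training
test_ids = [s for s in subjects if s > 600]            # remaining subjects: testing
```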
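
Likewise, the reported training schedules map naturally onto standard PyTorch optimizers and schedulers. This is a hedged sketch under that assumption; `model` is a placeholder for the gait backbone, and the training loop itself is not the authors' implementation:

```python
# Sketch of the two reported training schedules using standard PyTorch
# optimizers and schedulers; `model` is a stand-in, not a gait network.
import torch

model = torch.nn.Linear(1, 1)  # placeholder module for illustration only

# Default setting: Adam, 320K iterations, lr 1e-4 dropped to 1e-5 after 200K.
adam = torch.optim.Adam(model.parameters(), lr=1e-4)
adam_schedule = torch.optim.lr_scheduler.MultiStepLR(
    adam, milestones=[200_000], gamma=0.1)

# GaitBase / DeepGaitV2: SGD, 240K iterations, lr 1e-1 decayed by 1/10
# at 100K, 140K, and 170K iterations.
sgd = torch.optim.SGD(model.parameters(), lr=1e-1)
sgd_schedule = torch.optim.lr_scheduler.MultiStepLR(
    sgd, milestones=[100_000, 140_000, 170_000], gamma=0.1)

# Batch sampling: 8 subjects x 16 sequences per subject x 30 frames per sequence.
batch_spec = (8, 16, 30)
```

In an iteration-based loop, each scheduler's `step()` would be called once per iteration so that the milestones are counted in iterations rather than epochs.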