QAGait: Revisit Gait Recognition from a Quality Perspective
Authors: Zengbin Wang, Saihui Hou, Man Zhang, Xu Liu, Chunshui Cao, Yongzhen Huang, Peipei Li, Shibiao Xu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate our QAGait can guarantee both gait reliability and performance enhancement. |
| Researcher Affiliation | Collaboration | ¹Beijing University of Posts and Telecommunications, ²Beijing Normal University, ³Watrix AI. {wzb1, zhangman, lipeipei, shibiaoxu}@bupt.edu.cn, {housaihui, huangyongzhen}@bnu.edu.cn, {xu.liu, chunshui.cao}@watrix.ai |
| Pseudocode | No | The paper describes the proposed methods in detail but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/wzb-bupt/QAGait. |
| Open Datasets | Yes | Datasets. Our method is primarily evaluated on two in-the-wild gait datasets, Gait3D (Zheng et al. 2022) and GREW (Zhu et al. 2021), due to their complex data collection environments and various covariates in outdoor scenes. We also include CASIA-B (Yu, Tan, and Tan 2006) since it utilizes outdated background subtraction for segmentation. |
| Dataset Splits | No | We strictly follow the original training and test settings. (This refers to pre-defined settings of the datasets but does not explicitly state the splits, especially for validation.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions the use of OpenCV functions (e.g., cv2.connectedComponents()), but it does not specify version numbers for OpenCV or for other software dependencies, such as deep learning frameworks. |
| Experiment Setup | Yes | Training Details. We adopt the latest GaitBase (Fan et al. 2023) as our backbone. All data are normalized to 64×44. We randomly select P identities and K corresponding sequences per mini-batch (i.e., {32, 4} for Gait3D / GREW and {8, 16} for CASIA-B). Each sequence contains 30 randomly sampled silhouettes. We use the SGD optimizer with weight decay 5e-4. The initial learning rate is 0.1 and decays by a factor of 10 at each milestone (i.e., {20K, 40K, 50K} for Gait3D / CASIA-B, and {80K, 120K, 150K} for GREW). The total iterations are 60K for Gait3D / CASIA-B and 180K for GREW. The overall loss function is L = L1 + L3. Implementation Details. For a fair comparison, we retain at least 15 frames per sequence when too many frames would be removed in the quality assessment step. For the margin setting, we apply grid search and select the optimal (m1 = 0.1, s = 8) for ArcFace (Table 6), and our QACE follows this setting. For QATriplet, we follow the latest gait research and set m2 = 0.15 so that the average margin between m_ap and m_an is around 0.2. The thresholds in Maximal Connect Area and Template Match are ε = 0.95 and τ = 0.001. The random disturbance in Alignment is set as θ = 5°. |
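The Software Dependencies row notes that the paper's Maximal Connect Area step relies on cv2.connectedComponents(). As a dependency-free illustration of that step, the sketch below keeps only the largest 8-connected foreground component of a binary silhouette mask using a pure-Python flood fill; the function name and the list-of-lists mask format are our own choices, not the paper's API.

```python
from collections import deque

def keep_largest_component(mask):
    """Keep only the largest 8-connected foreground component of a binary
    silhouette mask (list of lists of 0/1). A stdlib-only stand-in for the
    cv2.connectedComponents() call mentioned in the paper."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}  # component label -> pixel count
    cur = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                cur += 1
                labels[y][x] = cur
                queue = deque([(y, x)])
                count = 0
                while queue:  # BFS flood fill over 8-neighbours
                    cy, cx = queue.popleft()
                    count += 1
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = cur
                                queue.append((ny, nx))
                sizes[cur] = count
    if not sizes:  # empty mask: nothing to keep
        return mask
    best = max(sizes, key=sizes.get)
    return [[1 if labels[y][x] == best else 0 for x in range(w)]
            for y in range(h)]
```

In the paper's pipeline this filtering is paired with the ε = 0.95 threshold from the Experiment Setup row; how exactly the ratio is computed against that threshold is not quoted here, so only the component extraction itself is sketched.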