SkeletonGait: Gait Recognition Using Skeleton Maps

Authors: Chao Fan, Jingzhe Ma, Dongyang Jin, Chuanfu Shen, Shiqi Yu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Beyond achieving state-of-the-art performances over five popular gait datasets, more importantly, SkeletonGait uncovers novel insights about how important structural features are in describing gait and when they play a role. Furthermore, we propose a multi-branch architecture, named SkeletonGait++, to make use of complementary features from both skeletons and silhouettes. Experiments indicate that SkeletonGait++ outperforms existing state-of-the-art methods by a significant margin in various scenarios.
Researcher Affiliation | Academia | (1) Research Institute of Trustworthy Autonomous System, Southern University of Science and Technology; (2) Department of Computer Science and Engineering, Southern University of Science and Technology; (3) The University of Hong Kong
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | The source code is available at https://github.com/ShiqiYu/OpenGait.
Open Datasets | Yes | OU-MVLP (Takemura et al. 2018), GREW (Zhu et al. 2021), Gait3D (Zheng et al. 2022), SUSTech1K (Shen et al. 2023), and CCPG (Li et al. 2023).
Dataset Splits | No | The paper mentions a 'Train Set' and 'Test Set' for each dataset but does not explicitly describe a validation split for reproduction. Table 1 lists 'Milestones' related to training steps, not validation splits.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions that its code is integrated into OpenGait but does not provide specific ancillary software details, such as library names with version numbers (e.g., Python 3.8, PyTorch 1.9), needed to replicate the experiment.
Experiment Setup | Yes | Table 1 displays the main hyper-parameters of our experiments. Unless otherwise specified, a) Different datasets often employ distinct pose data formats... b) DeepGaitV2 denotes its pseudo-3D variant... c) The double-side cutting strategy... The input size of skeleton maps is 2×64×44. d) At the test phase... As for the training stage, the data sampler collects a fixed-length segment of 30 frames as input. e) The spatial augmentation strategy suggested by (Fan et al. 2022) is adopted. f) The SGD optimizer with an initial learning rate of 0.1 and weight decay of 0.0005 is utilized. g) The σ controlling the variance in Eq. 2 and Eq. 3 is set to 8.0 as default.
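Point g) fixes σ = 8.0 for the Gaussians that rasterize pose coordinates into the 2×64×44 skeleton-map input of point c). A minimal sketch of that rendering step, assuming a joint-only heatmap; the function name, example coordinates, and the stubbed second channel are illustrative, not taken from the OpenGait codebase:

```python
import numpy as np

def joint_heatmap(joints, height=64, width=44, sigma=8.0):
    """Render 2D joint coordinates as one Gaussian heatmap channel.

    sigma=8.0 mirrors the paper's default for the variance term in
    Eq. 2 and Eq. 3; everything else here is a simplified sketch.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for x, y in joints:
        g = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, g)  # keep strongest response per pixel
    return heatmap

# Hypothetical joint coordinates inside a 64x44 frame; a real pipeline
# would take them from a pose estimator.
joints = [(22, 8), (22, 24), (18, 40), (26, 40)]
joint_channel = joint_heatmap(joints)

# The paper's skeleton map is two-channel (roughly joints and limbs),
# giving the stated 2x64x44 input; the limb channel is stubbed here.
skeleton_map = np.stack([joint_channel, joint_channel])
print(skeleton_map.shape)  # (2, 64, 44)
```

Taking the per-pixel maximum over joints keeps each Gaussian peak at 1.0, so σ alone controls how far a joint's response spreads across the map.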