Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

RA-GAR: A Richly Annotated Benchmark for Gait Attribute Recognition

Authors: Chenye Wang, Saihui Hou, Aoqi Li, Qingyuan Cai, Yongzhen Huang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on the RA-GAR and MAGait datasets demonstrate the effectiveness of CLIP-GAR, showing significant improvements in mean accuracy and F1 score.
Researcher Affiliation Collaboration Chenye Wang (1), Saihui Hou (1,2)*, Aoqi Li (1), Qingyuan Cai (1), Yongzhen Huang (1,2)*; (1) School of Artificial Intelligence, Beijing Normal University; (2) WATRIX.AI; EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes the methodology in detailed paragraph text and uses a framework diagram (Figure 3) but does not include explicit pseudocode or algorithm blocks.
Open Source Code No The paper provides a link to the RA-GAR dataset (https://github.com/BNU-IVC/RA-GAR) but does not state that the code for the proposed method (CLIP-GAR) is open-sourced, nor does it provide a link to it.
Open Datasets Yes The RA-GAR dataset is publicly available at https://github.com/BNU-IVC/RA-GAR.
Dataset Splits Yes The RA-GAR dataset is divided into two subsets: a training set consisting of 250 randomly selected subjects, and a test set composed of the remaining 288 subjects. The training set comprises 57,155 sequences, while the test set contains 65,912 sequences, both covering the full range of attributes.
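The split described above is subject-level: 250 of the 538 subjects are sampled at random for training and the remaining 288 form the test set. A minimal sketch of such a split (subject IDs and the random seed are placeholder assumptions, not taken from the paper):

```python
import random

# 250 + 288 = 538 subjects in total, per the reported split.
subjects = [f"subject_{i:04d}" for i in range(538)]  # placeholder IDs

rng = random.Random(0)  # seed is an assumption for reproducibility
train_ids = set(rng.sample(subjects, 250))           # 250 training subjects
test_ids = [s for s in subjects if s not in train_ids]  # remaining 288

# Subject-level disjointness guarantees no identity leaks across splits.
assert train_ids.isdisjoint(test_ids)
```

Splitting by subject rather than by sequence ensures that all sequences of a given person land in exactly one split.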
Hardware Specification No The paper does not provide specific hardware details such as GPU/CPU models or processor types used for running the experiments.
Software Dependencies No The paper mentions several software components such as PaddleSeg, RTMPose, ViTPose, and MotionBERT, citing the papers that introduced them, but does not provide specific version numbers for these dependencies.
Experiment Setup Yes In our experiments, the input silhouette sequences are standardized to a fixed resolution of 64 × 44 and a fixed length of 30 frames... For both the Align and Fusion stages, we use the Adam optimizer with a learning rate of 10^-4 and a weight decay of 2 × 10^-5. The epsilon is set to 10^-6. The beta coefficients are β1 = 0.9 and β2 = 0.999. The batch size for both stages is set to 32, with the Align stage training for 120,000 iterations and the Fusion stage training for 20,000 iterations.
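The reported optimizer settings can be made concrete with a minimal scalar Adam step using the paper's hyperparameters (lr = 10^-4, weight decay = 2 × 10^-5, eps = 10^-6, β1 = 0.9, β2 = 0.999). This is a sketch, not the authors' code; whether weight decay is coupled (L2-style) or decoupled is not specified in the paper, so the L2-style form below is an assumption:

```python
import math

# Hyperparameters reported in the paper's experiment setup.
LR, WD, EPS, BETA1, BETA2 = 1e-4, 2e-5, 1e-6, 0.9, 0.999

def adam_step(param, grad, m, v, t):
    """One Adam update on a scalar parameter.

    Weight decay is folded into the gradient as an L2 penalty
    (an assumption; the paper does not specify coupled vs. decoupled).
    """
    g = grad + WD * param                    # L2-style weight decay
    m = BETA1 * m + (1 - BETA1) * g          # first-moment estimate
    v = BETA2 * v + (1 - BETA2) * g * g      # second-moment estimate
    m_hat = m / (1 - BETA1 ** t)             # bias correction
    v_hat = v / (1 - BETA2 ** t)
    param = param - LR * m_hat / (math.sqrt(v_hat) + EPS)
    return param, m, v

# Toy usage: the first step moves the parameter by roughly lr = 1e-4.
p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=1.0, m=m, v=v, t=1)
```

In a framework such as PyTorch, the equivalent configuration would simply pass these values as the optimizer's lr, betas, eps, and weight_decay arguments.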