Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

FreeCap: Hybrid Calibration-Free Motion Capture in Open Environments

Authors: Aoru Xue, Yiming Ren, Zining Song, Mao Ye, Xinge Zhu, Yuexin Ma

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on Human-M3 and FreeMotion datasets demonstrate that our method significantly outperforms state-of-the-art single-modal methods, offering an expandable and efficient solution for multi-person motion capture across various applications. ... Furthermore, we also present comprehensive ablation studies to evaluate the necessity of our network modules and fusion method, and the efficiency and generalization of our matching strategy.
Researcher Affiliation Collaboration Aoru Xue¹*, Yiming Ren¹*, Zining Song¹, Mao Ye², Xinge Zhu³, Yuexin Ma¹ (¹ShanghaiTech University, ²Inceptio Technology, ³Shanghai Jiao Tong University)
Pseudocode Yes Algorithm 1: Pose-aware Cross-sensor Matching ... Algorithm 2: OptMatch
Open Source Code No The paper does not provide an explicit statement or a link to an open-source code repository for the methodology described.
Open Datasets Yes Extensive experiments conducted on the multi-person large-scale dataset Human-M3 (Fan et al. 2023a) and the multi-view sensor dataset FreeMotion (Ren et al. 2024) demonstrate that our method achieves significant improvements in human pose compared to other single-modal SOTA methods. ... For the experiment, we consistently employed the SURREAL (Varol et al. 2017) dataset for pre-training throughout our experimental procedure, mirroring the approach adopted by LiveHPS.
Dataset Splits No The paper refers to using the 'testing dataset' of FreeMotion and Human-M3 but does not provide specific percentages, sample counts, or detailed methodologies for how the datasets were split into training, validation, and test sets for their experiments.
Hardware Specification Yes We build our framework on PyTorch 2.0.0 and CUDA 11.8 and run the whole process on a server equipped with an Intel(R) Xeon(R) E5-2678 CPU and 8 NVIDIA RTX 3090 GPUs.
Software Dependencies Yes We build our framework on PyTorch 2.0.0 and CUDA 11.8 and run the whole process on a server equipped with an Intel(R) Xeon(R) E5-2678 CPU and 8 NVIDIA RTX 3090 GPUs.
Experiment Setup Yes For PCM, we set δ to 100, λ₀ to 0.1 and n_iter to 2 to get a stable matching. During the training of SPO, we train the network over 500 epochs with batch size of 32 and sequence length of 32, using an initial learning rate of 10⁻⁴, and AdamW optimizer with weight decay of 10⁻⁴. We set λ₁ = 1, λ₂ = 1, λ₃ = 0.01 throughout our experiment. As for SMPL solver, we set λ₄ = 1, λ₅ = 0.2, λ₆ = 10 and λ₇ = 1.
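The hyperparameters quoted above can be collected into a single configuration for reference. This is a minimal sketch: the section names (`pcm`, `spo_training`, `smpl_solver`) and key names are illustrative assumptions, since the paper does not release a configuration file or code.

```python
# Hedged sketch of the reported FreeCap experiment setup, gathered into
# one config dict. All key names are hypothetical; only the numeric
# values come from the paper's quoted setup.
config = {
    # Pose-aware Cross-sensor Matching (PCM)
    "pcm": {"delta": 100, "lambda0": 0.1, "n_iter": 2},
    # SPO network training
    "spo_training": {
        "epochs": 500,
        "batch_size": 32,
        "sequence_length": 32,
        "optimizer": "AdamW",
        "learning_rate": 1e-4,
        "weight_decay": 1e-4,
        "loss_weights": {"lambda1": 1.0, "lambda2": 1.0, "lambda3": 0.01},
    },
    # SMPL solver term weights
    "smpl_solver": {"lambda4": 1.0, "lambda5": 0.2,
                    "lambda6": 10.0, "lambda7": 1.0},
}

print(config["spo_training"]["learning_rate"])  # → 0.0001
```

Keeping the PCM, training, and solver weights in separate sections mirrors the three distinct stages the setup paragraph describes, which makes it easier to spot which λ belongs to which stage.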