Posture-Informed Muscular Force Learning for Robust Hand Pressure Estimation

Authors: Kyungjin Seo, Junghoon Seo, Hanseok Jeong, Sangpil Kim, Sang Ho Yoon

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluation and analysis of experiments demonstrate the improved performance of our approach, showing its consistent superiority over existing sEMG-based and vision-based methods.
Researcher Affiliation | Academia | 1 Graduate School of Culture Technology, KAIST, South Korea; 2 Graduate School of Metaverse, KAIST, South Korea; 3 Department of Artificial Intelligence, Korea University, South Korea
Pseudocode | No | The paper describes the methodology in prose and figures (Figure 2), but does not contain explicit pseudocode or algorithm blocks.
Open Source Code | Yes | Video demos, data, and code are available online. Project Page: https://hci-tech-lab.github.io/PiMForce/
Open Datasets | Yes | Video demos, data, and code are available online. Project Page: https://hci-tech-lab.github.io/PiMForce/
Dataset Splits | Yes | We collected 1,980 seconds of synchronized multimodal data per participant (30 seconds × 22 hand-object interactions × 3 sessions). Here, we used data from 2 sessions for training while the remaining session was reserved for evaluation.
Hardware Specification | Yes | The model training was conducted on a computing environment equipped with an AMD EPYC 7763 64-Core Processor, 1.0TB RAM, and an NVIDIA RTX 6000A 48GiB GPU. In our inference setup, we used a computer equipped with a 12th Generation Intel Core i9-12900K processor, 32GB of RAM, and an NVIDIA GeForce RTX 4080 with 16GB of GPU memory.
Software Dependencies | No | The paper mentions deep learning models and optimizers (Adam) but does not specify software dependencies with version numbers.
Experiment Setup | Yes | We trained the model for 50 epochs to ensure adequate training steps. We set the batch size to 64 to balance computational efficiency and training stability. The model was optimized using the Adam optimizer [77] with a learning rate of 0.0001 and (β1, β2) of (0.9, 0.999), along with a weight decay of 0.0001.
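The session-based split quoted under "Dataset Splits" (2 of 3 sessions for training, the third held out for evaluation) can be sketched as follows. This is an illustrative sketch only: the record fields and the participant count of 10 are assumptions, not the released data schema.

```python
# Each participant contributes 22 hand-object interactions x 3 sessions.
# Sessions 1-2 go to training; session 3 is reserved for evaluation.
records = [
    {"participant": p, "session": s, "interaction": i}
    for p in range(10)        # hypothetical participant count
    for s in (1, 2, 3)
    for i in range(22)
]

train_set = [r for r in records if r["session"] in (1, 2)]
eval_set = [r for r in records if r["session"] == 3]
```

Splitting by whole session (rather than shuffling samples) keeps temporally adjacent frames from the same recording out of both sets, which avoids inflating evaluation scores.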
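The "Experiment Setup" row reports concrete Adam hyperparameters (lr = 1e-4, (β1, β2) = (0.9, 0.999), weight decay 1e-4). A minimal single-parameter sketch of one Adam update with those values is shown below; the epsilon of 1e-8 and the L2-style handling of weight decay are common defaults assumed here, not details stated in the paper.

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=1e-4, b1=0.9, b2=0.999, eps=1e-8, wd=1e-4):
    """One Adam update for a scalar parameter theta at step t (t >= 1)."""
    g = grad + wd * theta              # L2 weight decay folded into the gradient
    m = b1 * m + (1 - b1) * g          # biased first-moment estimate
    v = b2 * v + (1 - b2) * g * g      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)          # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# On the first step the bias correction makes the update magnitude
# approximately the learning rate, regardless of the gradient's scale.
theta, m, v = adam_step(theta=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```

In practice this corresponds to constructing the optimizer once (e.g. `Adam(params, lr=1e-4, betas=(0.9, 0.999), weight_decay=1e-4)` in a deep learning framework) and calling its step function after each batch of 64 samples for 50 epochs.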