Learning Interaction-aware 3D Gaussian Splatting for One-shot Hand Avatars

Authors: Xuan Huang, Hanhui Li, Wanquan Liu, Xiaodan Liang, Yiqiang Yan, Yuhao Cheng, Chenqiang Gao

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Our proposed method is validated via extensive experiments on the large-scale InterHand2.6M dataset, and it significantly improves the state-of-the-art performance in image quality.
Researcher Affiliation Collaboration Xuan Huang1, Hanhui Li1, Wanquan Liu1, Xiaodan Liang1, Yiqiang Yan2, Yuhao Cheng2, Chenqiang Gao1 (1 Shenzhen Campus of Sun Yat-Sen University, 2 Lenovo Research)
Pseudocode No The paper describes the methodology in text and diagrams but does not provide any pseudocode or algorithm blocks.
Open Source Code Yes Project Page: https://github.com/XuanHuang0/GuassianHand
Open Datasets Yes Our experiments are conducted on the publicly available InterHand2.6M dataset [19] (CC-BY-NC 4.0 licensed)
Dataset Splits No The paper specifies a 'training set' and 'testing set' but does not explicitly detail a separate 'validation' split with sizes or percentages.
Hardware Specification Yes Our network is trained on three A6000 GPUs
Software Dependencies No The paper mentions using the 'Adam optimizer' but does not specify versions for programming languages, machine learning frameworks, or other key software libraries.
Experiment Setup Yes Our network is trained on three A6000 GPUs using the Adam optimizer [44] with a learning rate of 1×10⁻⁴ for eight epochs. Loss weights in Eq. (3) are set as λ_rgb = 10.0, λ_VGG = 0.1. For interaction detection, we set N_c = 100 and T = 90. For the self-adaptive GRM, we set T_d = 0.1 and T_s = 0.9. ... The one-shot fitting takes 50 optimization steps with a learning rate of 1×10⁻². ... Loss weights in Eq. (4, 5) are set as λ_mask = 1.0, λ_reg = 0.01.
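
For reproduction, the reported hyperparameters can be gathered into a single configuration. The sketch below is a minimal PyTorch-style illustration only: the dictionary keys, the use of an L1 term for the RGB loss, and the `vgg_loss_fn`/`model` placeholders are assumptions of ours, not the authors' released code.

```python
import torch

# Hyperparameters as reported in the paper; the grouping and key names are ours.
CONFIG = {
    "train": {"lr": 1e-4, "epochs": 8},            # three A6000 GPUs, Adam
    "one_shot_fitting": {"lr": 1e-2, "steps": 50},
    "loss_weights": {
        "rgb": 10.0, "vgg": 0.1,                   # Eq. (3)
        "mask": 1.0, "reg": 0.01,                  # Eq. (4, 5)
    },
    "interaction_detection": {"N_c": 100, "T": 90},
    "self_adaptive_grm": {"T_d": 0.1, "T_s": 0.9},
}

def training_loss(pred_rgb, gt_rgb, vgg_loss_fn, w=CONFIG["loss_weights"]):
    """Weighted reconstruction loss in the spirit of Eq. (3):
    L = lambda_rgb * L_rgb + lambda_VGG * L_VGG.
    Using L1 for the RGB term is an assumption; `vgg_loss_fn` stands in
    for any perceptual (VGG) loss implementation."""
    l_rgb = torch.nn.functional.l1_loss(pred_rgb, gt_rgb)
    l_vgg = vgg_loss_fn(pred_rgb, gt_rgb)
    return w["rgb"] * l_rgb + w["vgg"] * l_vgg

# Optimizer matching the reported schedule (the `model` object is hypothetical):
# optimizer = torch.optim.Adam(model.parameters(), lr=CONFIG["train"]["lr"])
# For the one-shot fitting stage, swap in CONFIG["one_shot_fitting"]["lr"]
# and run CONFIG["one_shot_fitting"]["steps"] optimization steps.
```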