Fine-Grained Multi-View Hand Reconstruction Using Inverse Rendering
Authors: Qijun Gan, Wentong Li, Jinwei Ren, Jianke Zhu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive experiments on InterHand2.6M, DeepHandMesh, and a dataset collected by ourselves, whose promising results show that our proposed approach outperforms the state-of-the-art methods on both reconstruction accuracy and rendering quality. |
| Researcher Affiliation | Academia | Qijun Gan, Wentong Li, Jinwei Ren, Jianke Zhu* College of Computer Science and Technology, Zhejiang University, China {ganqijun,liwentong,zijinxuxu,jkzhu}@zju.edu.cn |
| Pseudocode | No | The paper describes its methodology in narrative text and uses equations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and dataset are publicly available at https://github.com/agnJason/FMHR. |
| Open Datasets | Yes | InterHand2.6M. InterHand2.6M (Moon et al. 2020) is a large-scale dataset... DeepHandMesh. The DeepHandMesh dataset (Moon, Shiratori, and Lee 2020)... Code and dataset are publicly available at https://github.com/agnJason/FMHR. |
| Dataset Splits | No | The paper mentions using Inter Hand2.6M for pre-training and fine-tuning, and evaluating on 'the rest views' for one specific experiment, but it does not provide comprehensive training/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility across all datasets. |
| Hardware Specification | Yes | Notably, the entire optimization pipeline is computationally efficient, which takes approximately 90 seconds on a single NVIDIA 3090Ti GPU. |
| Software Dependencies | No | The paper mentions the Adam optimizer but does not list specific software dependencies (e.g., programming languages, libraries, frameworks) with their version numbers required for reproducibility. |
| Experiment Setup | Yes | During the optimization process, we utilize the Adam optimizer (Kingma and Ba 2014) with the balanced weights of λ1 = 20, λ2 = 40, λ3 = 20, λ4 = 100, and λ5 = 2 to jointly optimize the vertices, vertex albedo, and lighting coefficients over 100 iterations. Subsequently, for fine-tuning and joint optimization, each process requires 100 epochs of training with γ1 = 100 and γ2 = 2, respectively. (A minimal optimization sketch follows this table.) |
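
To make the quoted setup concrete, below is a minimal PyTorch-style sketch of the joint optimization. Only the Adam optimizer, the balanced weights λ1–λ5, the optimized quantities (vertices, vertex albedo, lighting coefficients), and the 100-iteration budget come from the paper; the five loss terms, tensor shapes, and learning rate are hypothetical placeholders, since the paper's individual loss definitions are not reproduced in this report.

```python
# Sketch of the quoted joint optimization, assuming a PyTorch setup.
# The loss terms below are placeholders, not the paper's actual losses.
import torch

# Quantities jointly optimized, per the quoted setup (shapes are illustrative).
vertices = torch.randn(778, 3, requires_grad=True)   # MANO-like hand mesh vertices
albedo = torch.rand(778, 3, requires_grad=True)      # per-vertex albedo
lighting = torch.zeros(9, 3, requires_grad=True)     # e.g. SH lighting coefficients

# Adam optimizer, as stated in the paper (learning rate is an assumption).
optimizer = torch.optim.Adam([vertices, albedo, lighting], lr=1e-2)

# Balanced weights quoted from the paper: λ1 = 20, ..., λ5 = 2.
lam = [20.0, 40.0, 20.0, 100.0, 2.0]

def loss_terms(vertices, albedo, lighting):
    """Hypothetical stand-ins for the paper's five loss terms
    (e.g. photometric, silhouette, and regularization losses)."""
    return [
        (vertices ** 2).mean(),        # placeholder term 1
        (albedo - 0.5).abs().mean(),   # placeholder term 2
        (lighting ** 2).mean(),        # placeholder term 3
        vertices.norm(dim=-1).mean(),  # placeholder term 4
        albedo.var(),                  # placeholder term 5
    ]

for it in range(100):  # 100 iterations, as reported
    optimizer.zero_grad()
    terms = loss_terms(vertices, albedo, lighting)
    total = sum(w * t for w, t in zip(lam, terms))  # weighted sum of losses
    total.backward()
    optimizer.step()
```

The subsequent fine-tuning and joint-optimization stages would follow the same pattern, each running for 100 epochs with their own weights (γ1 = 100 and γ2 = 2, per the quote).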