Hand-Object Interaction Image Generation
Authors: Hezhen Hu, Weilun Wang, Wengang Zhou, Houqiang Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two large-scale datasets, i.e., HO3Dv3 and DexYCB, demonstrate the effectiveness and superiority of our framework both quantitatively and qualitatively. |
| Researcher Affiliation | Academia | 1CAS Key Laboratory of GIPAS, EEIS Department University of Science and Technology of China (USTC) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The code will be available at https://github.com/play-with-HOI-generation/HOIG. We include all the necessary code, instructions and environment needed in the Supplementary Materials or GitHub repository. |
| Open Datasets | Yes | Datasets. We evaluate our method on two large-scale datasets with annotated hand-object mesh representation, i.e., HO3Dv3 [14] and DexYCB [6]. HO3Dv3 is captured in the real-world setting. It contains 10 different subjects performing various fine-grained manipulation on one among 10 objects from YCB models [3]. The training and testing set contain 58,148 and 13,938 images, respectively. DexYCB is recorded in the controlled environment, with 10 subjects manipulating one among 20 objects. In our experiment, we choose the frames containing interaction between hand and object, with 33,562 and 8,554 images for training and testing, respectively. |
| Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly mention a validation set or its split details. |
| Hardware Specification | Yes | The whole framework is implemented on PyTorch and we perform experiments on 4 NVIDIA RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number or list other key software dependencies with their versions. |
| Experiment Setup | Yes | The Adam optimizer is adopted and the training lasts 30 epochs. We set the batch size to 8 in our experiment. The learning rate is set as 2e-4 for the first 15 epochs and linearly decays to 2e-6 till the end. The hyperparameters λ1 and λ2 are both set to 10. |
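The learning-rate schedule reported in the Experiment Setup row (constant 2e-4 for the first 15 epochs, then linear decay to 2e-6 by the end of training) can be sketched as a small helper. This is a minimal reconstruction, not the authors' code; the exact epoch at which the decay reaches 2e-6 (here, the final epoch) is an assumption, since the paper only says "till the end".

```python
def lr_at_epoch(epoch: int,
                total_epochs: int = 30,
                constant_epochs: int = 15,
                base_lr: float = 2e-4,
                final_lr: float = 2e-6) -> float:
    """Learning rate per epoch: constant for the first `constant_epochs`,
    then linear decay from `base_lr` to `final_lr` at the last epoch.

    Assumption: the decay endpoint is the final epoch (total_epochs - 1).
    """
    if epoch < constant_epochs:
        return base_lr
    # Fraction of the decay phase completed (0 at first decay epoch, 1 at last).
    frac = (epoch - constant_epochs) / (total_epochs - 1 - constant_epochs)
    return base_lr + frac * (final_lr - base_lr)
```

In PyTorch, the same schedule could be attached to the reported Adam optimizer via `torch.optim.lr_scheduler.LambdaLR`, scaling by `lr_at_epoch(e) / 2e-4` each epoch.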