Contact2Grasp: 3D Grasp Synthesis via Hand-Object Contact Constraint
Authors: Haoming Li, Xinzhuo Lin, Yang Zhou, Xiang Li, Yuchi Huo, Jiming Chen, Qi Ye
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive validations on two public datasets show that our method outperforms state-of-the-art methods regarding grasp generation on various metrics. [...] 4 Experiment 4.1 Implementation Details 4.2 Datasets 4.3 Evaluation Metrics 4.4 Comparison with State-of-Arts 4.5 Ablation Study |
| Researcher Affiliation | Collaboration | Haoming Li1, Xinzhuo Lin1, Yang Zhou3, Xiang Li3, Yuchi Huo4,5, Jiming Chen1,2 and Qi Ye1,2 — 1College of Control Science and Engineering, Zhejiang University, China; 2Key Lab of CS&AUS of Zhejiang Province, China; 3OPPO US Research Center; 4State Key Lab of CAD&CG, Zhejiang University, China; 5Zhejiang Lab, China |
| Pseudocode | No | The paper describes the method pipeline and network architectures but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | Extensive validations on two public datasets show that our method outperforms state-of-the-art methods regarding grasp generation on various metrics. [...] Obman. We first validate our framework on the Obman dataset [Hasson et al., 2019] [...] Contact Pose. The Contact Pose dataset [Brahmbhatt et al., 2020] is a real dataset for studying hand-object interaction, which captures both ground-truth thermal contact maps and hand-object poses. |
| Dataset Splits | No | The paper states: 'We manually split the dataset into a training and test group according to the object type.' It does not explicitly mention a 'validation' split with specific percentages or counts for reproducing the data partitioning. |
| Hardware Specification | Yes | All the experiments were implemented in PyTorch, in which our models ran 130 epochs in a single RTX 3090 GPU with 24GB memory. |
| Software Dependencies | No | All the experiments were implemented in PyTorch, in which our models ran 130 epochs in a single RTX 3090 GPU with 24GB memory. (Only 'PyTorch' is mentioned; no version number or other specific software dependencies are given.) |
| Experiment Setup | Yes | Our method is trained using a batch size of 32 examples, and an Adam optimizer with a constant learning rate of 1e-4. The training dataset is randomly augmented with [-1, 1]cm translation and rotation at three (XYZ) dimensions. All the experiments were implemented in PyTorch, in which our models ran 130 epochs in a single RTX 3090 GPU with 24GB memory. In the refinement process, each input is optimized for 200 steps. [...] we set γ0 = 0.5, γ1 = 0.5 and γ2 = 1e-3 are constants for balancing the loss terms. [...] λv=35, λt=0.1 and λp=0.1 are constants balancing the losses. [...] λptr=5 and λcst=0.05 denote the corresponding loss weights. [...] We set ω0=0.1, ω1=2 and ω2=0.2. |
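The data augmentation quoted above (random [-1, 1] cm translation plus rotation about the X, Y, and Z axes) can be sketched as follows. This is a minimal illustration, not the authors' code: the paper gives no rotation-angle range, so `max_rot_deg` here is an assumption, as are the function and parameter names.

```python
import numpy as np

def augment_point_cloud(points, rng, max_trans_cm=1.0, max_rot_deg=180.0):
    """Apply a random rigid augmentation to an (N, 3) point cloud in cm.

    Translation is drawn uniformly from [-1, 1] cm per axis, matching the
    paper's description. The rotation range (max_rot_deg) is an assumption;
    the paper only states that rotation is applied in three (XYZ) dimensions.
    """
    # Uniform translation in [-max_trans_cm, max_trans_cm] per axis.
    t = rng.uniform(-max_trans_cm, max_trans_cm, size=3)

    # Random rotation angles about each axis, composed as Rz @ Ry @ Rx.
    ax, ay, az = rng.uniform(-np.radians(max_rot_deg),
                             np.radians(max_rot_deg), size=3)
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx

    # Rigid transform: rotate, then translate.
    return points @ R.T + t
```

Because the transform is rigid, it preserves pairwise distances in the cloud, so it perturbs pose without distorting hand or object geometry.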