Learning Score-based Grasping Primitive for Human-assisting Dexterous Grasping

Authors: Tianhao Wu, Mingdong Wu, Jiyao Zhang, Yunchong Gan, Hao Dong

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate the superiority of our proposed method compared to baselines, highlighting user awareness and practicality in real-world applications. In our experiments, we evaluate several methods on a dexterous grasping environment that assists humans in grasping over 4900+ on-table objects with up to 200 realistic human wrist movement patterns.
Researcher Affiliation | Academia | Tianhao Wu 1,2,3*, Mingdong Wu 1,3*, Jiyao Zhang 1,2,3, Yunchong Gan 1, Hao Dong 1,3; 1 Center on Frontiers of Computing Studies, School of Computer Science, Peking University; 2 Beijing Academy of Artificial Intelligence; 3 National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
Pseudocode | No | The paper describes the methods in prose and mathematical formulations but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | The codes and demonstrations can be viewed at https://sites.google.com/view/graspgf.
Open Datasets | Yes | We created our success grasping pose based on the UniDexGrasp dataset [10]. To mimic real human grasping patterns, we resampled 200 real human grasping wrist trajectories from HandoverSim [28].
Dataset Splits | Yes | The dataset was split into three sets: training instances (3127 objects, 363,479 grasps), seen category unseen instances (519 objects, 2,595 grasps), and unseen category instances (1298 objects, 6,490 grasps). (These splits are summarized in a sketch after the table.)
Hardware Specification | Yes | It takes 60 hours to train on a single A100 for the primitive policy to converge. We trained the residual policy for a total of 10 million agent steps, which took approximately 15 hours using a single A100 GPU. We evaluate the inference speed on the GTX 1650, which is also used in our real-world experiment.
Software Dependencies | No | The paper mentions "PyTorch implementation" but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | We set the number of update intervals (nsteps) to 50, the number of optimization epochs (noptepochs) to 2, the mini-batch size (mini_batch_size) to 64, and the discount factor (gamma) to 0.99. Empirically, we set λs = 1.0, λa = 0.09, and λh = 0.5 for our approach. (See the configuration sketch below.)
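
As referenced in the Dataset Splits row, the reported split can be restated as a small Python sketch. The dictionary name and keys below are hypothetical and only summarize the counts quoted above; they are not taken from the authors' released code.

```python
# Hypothetical summary of the dataset splits reported in the paper
# (object and grasp counts restate the Dataset Splits row above).
DATASET_SPLITS = {
    "train":                         {"objects": 3127, "grasps": 363_479},
    "seen_category_unseen_instance": {"objects": 519,  "grasps": 2_595},
    "unseen_category":               {"objects": 1298, "grasps": 6_490},
}

# Example: total number of held-out evaluation objects across the two unseen splits.
held_out_objects = sum(
    DATASET_SPLITS[k]["objects"]
    for k in ("seen_category_unseen_instance", "unseen_category")
)
print(held_out_objects)  # 1817
```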
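
A minimal sketch of the training hyperparameters quoted in the Experiment Setup row, gathered into one configuration object. The class and field names are assumptions made for illustration, not the authors' API; only the numeric values come from the paper.

```python
from dataclasses import dataclass


@dataclass
class ResidualPolicyTrainConfig:
    # PPO-style optimization settings quoted from the paper
    nsteps: int = 50            # update interval: environment steps between policy updates
    noptepochs: int = 2         # optimization epochs per update
    mini_batch_size: int = 64   # mini-batch size for each optimization step
    gamma: float = 0.99         # discount factor
    # Weights reported by the authors (lambda_s, lambda_a, lambda_h)
    lambda_s: float = 1.0
    lambda_a: float = 0.09
    lambda_h: float = 0.5


# Usage: instantiate the defaults and pass them to a (hypothetical) training entry point.
config = ResidualPolicyTrainConfig()
```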