GraphMP: Graph Neural Network-based Motion Planning with Efficient Graph Search

Authors: Xiao Zang, Miao Yin, Jinqi Xiao, Saman Zonouz, Bo Yuan

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on a variety of environments, ranging from 2D Maze to 14D dual KUKA robotic arm, show that our proposed GraphMP achieves significant improvement on path quality and planning speed over state-of-the-art learning-based and classical planners, while preserving competitive success rate." "We evaluate GraphMP and other learning-based and classical planners in a variety of environments, ranging from 2D Maze to 14D dual KUKA robotic arm. Experimental results show that our proposed GraphMP achieves significant improvement on various planning performance metrics (i.e., path quality and planning speed) over the state-of-the-art planning methods, while preserving the competitive success rate."
Researcher Affiliation | Academia | Xiao Zang (1), Miao Yin (2), Jinqi Xiao (1), Saman Zonouz (3), Bo Yuan (1); (1) Department of Electrical and Computer Engineering, Rutgers University ({xz514, jx257, bo.yuan.ece}@rutgers.edu); (2) Department of Computer Science and Engineering, The University of Texas at Arlington (miao.yin@uta.edu); (3) School of Cybersecurity and Privacy, Georgia Institute of Technology (szonouz6@gatech.edu)
Pseudocode | Yes | "Algorithm 1 End-to-End Training Framework. Input: full training set D_heu, neural heuristic estimator f_heu(V, E, v_g, Θ_heu) with weights Θ_heu, the max iteration T_max, learning rate γ"
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | The paper describes the types of planning tasks and how the training datasets were prepared by randomly constructing RGGs and using an "oracle collision checker" for labeling. While it refers to [8] for the task types, it does not provide access information (URL, DOI, or citation with author/year) for publicly available datasets used for training, or for their generated datasets.
Dataset Splits | Yes | "For each environment, we prepare two different training datasets, each of which consists of 2000 different workspaces, to train the neural collision checker and heuristic estimator separately. ... We also use 500 problem instances as the validation set. After the training, GraphMP is evaluated in an end-to-end manner on 1000 problem instances with unseen workspaces."
Hardware Specification | Yes | "The experiments are conducted on a computer equipped with an AMD EPYC 7402P 24-Core Processor and an NVIDIA RTX A6000 GPU."
Software Dependencies | No | The paper mentions using "ADAM" as an optimizer but does not specify any programming languages or software libraries with version numbers (e.g., Python version, PyTorch/TensorFlow version, CUDA version) needed to replicate the experiment.
Experiment Setup | Yes | "Our neural collision checker adopts 3 iterations of obstacle encoding with an output dimension of 64, and the neural heuristic estimator has 5 loops of message passing in Eq. 2 with an output dimension of 32. For the training of both models, we select ADAM [32] as the optimizer and set the learning rate as 1e-3. The training epoch is 400 and the batch size is set as 8. We set the threshold θ in the in-search collision check as 80%. The number of graph nodes per sampling is 100 and the K value of K-NN is 10."
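The Pseudocode row only excerpts the inputs of Algorithm 1 (training set D_heu, estimator f_heu with weights Θ_heu, max iteration T_max, learning rate γ). As a rough, hypothetical illustration of the loop shape those inputs imply, the sketch below substitutes a one-parameter linear model for the paper's message-passing GNN estimator; the function name and all variable names are assumptions, not the authors' code.

```python
import random

def train_heuristic_estimator(d_heu, theta, t_max, gamma):
    """Hypothetical sketch of Algorithm 1's outer loop: sample from the
    training set, predict a heuristic value, and update the weights Theta_heu
    by gradient descent for T_max iterations with learning rate gamma.
    A scalar linear model stands in for the real GNN f_heu(V, E, v_g, Theta_heu)."""
    for _ in range(t_max):
        x, target = random.choice(d_heu)       # one (feature, oracle cost) pair
        pred = theta * x                       # stand-in forward pass of f_heu
        grad = 2.0 * (pred - target) * x       # gradient of the squared error
        theta -= gamma * grad                  # gradient step, learning rate gamma
    return theta

# Toy usage: fit the heuristic y = 2x, so theta should approach 2.
random.seed(0)
toy_set = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
theta_hat = train_heuristic_estimator(toy_set, theta=0.0, t_max=2000, gamma=0.01)
```

The point of the sketch is only the control flow: iterate to T_max, one prediction and one weight update per step, exactly the quantities the Input line of Algorithm 1 names.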
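The Experiment Setup row quotes concrete hyperparameters (ADAM with lr 1e-3, 400 epochs, batch size 8, threshold θ = 80%, 100 nodes per sampling, K = 10 for K-NN). A minimal sketch, assuming uniform 2D sampling for the random geometric graph, collects them and shows the K-NN graph-construction step; the config layout and the function `build_knn_graph` are illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Hyperparameters quoted in the Experiment Setup row; the dict itself is hypothetical.
CONFIG = {
    "optimizer": "Adam", "lr": 1e-3, "epochs": 400, "batch_size": 8,
    "collision_threshold": 0.80,   # theta for the in-search collision check
    "nodes_per_sampling": 100, "knn_k": 10,
}

def build_knn_graph(num_nodes, k, dim=2, seed=0):
    """Sketch of one sampling round: draw num_nodes configurations uniformly
    and connect each node to its k nearest neighbors (undirected edges)."""
    rng = random.Random(seed)
    pts = [tuple(rng.random() for _ in range(dim)) for _ in range(num_nodes)]
    edges = set()
    for i, p in enumerate(pts):
        nearest = sorted((math.dist(p, q), j) for j, q in enumerate(pts) if j != i)
        for _, j in nearest[:k]:
            edges.add((min(i, j), max(i, j)))  # deduplicate symmetric edges
    return pts, edges

PTS, EDGES = build_knn_graph(CONFIG["nodes_per_sampling"], CONFIG["knn_k"])
```

Because each node links to its own 10 nearest neighbors, every node ends up with degree at least 10, while the deduplicated edge set stays at or below 100 x 10 edges.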