Learning-based Motion Planning in Dynamic Environments Using GNNs and Temporal Encoding
Authors: Ruipeng Zhang, Chenning Yu, Jingkai Chen, Chuchu Fan, Sicun Gao
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that the proposed methods can significantly accelerate online planning over state-of-the-art complete dynamic planning algorithms. We evaluate the proposed approach in various challenging dynamic motion planning environments ranging from 2-DoF to 7-DoF KUKA arms. |
| Researcher Affiliation | Academia | The provided text does not contain explicit institutional affiliations (university/company names or email domains) for the authors. Therefore, classification of affiliation type is not possible from the given information. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks with explicit labels like 'Algorithm' or 'Pseudocode'. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit statement about code release. |
| Open Datasets | No | The paper states 'We randomly generate 2000 problems for training and 1000 problems for testing', indicating a custom-generated dataset, but it does not provide concrete access information (e.g., a link, DOI, or formal citation to a public dataset). |
| Dataset Splits | No | The paper specifies generated training and testing data ('We randomly generate 2000 problems for training and 1000 problems for testing.') but does not explicitly mention a validation dataset split or its size/percentage. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers required to replicate the experiment. |
| Experiment Setup | Yes | We first train the GNN-TE on all the training problems for 200 epochs. Afterward, we generate 1000 new training examples with DAgger and train for another 100 epochs. (See the sketch below for an illustration of this schedule.) |
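
The two-stage schedule quoted above (200 epochs on the generated problems, then DAgger data aggregation followed by 100 more epochs) can be made concrete with a minimal PyTorch sketch. Everything here is hypothetical: the `nn.Sequential` placeholder stands in for the paper's GNN-TE model, and `make_dataset` stands in for both the random problem generation and the DAgger collection step, since the paper releases no code.

```python
# Minimal sketch of the reported two-stage training schedule; all names
# are placeholders, as the paper does not release an implementation.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def make_dataset(n_problems: int) -> TensorDataset:
    # Stand-in for the paper's randomly generated planning problems:
    # random features and targets of a fixed, arbitrary size.
    x = torch.randn(n_problems, 16)
    y = torch.randn(n_problems, 4)
    return TensorDataset(x, y)

# Placeholder network standing in for the GNN-TE model.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(dataset, epochs: int) -> None:
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()

base = make_dataset(2000)    # "2000 problems for training"
train(base, epochs=200)      # stage 1: 200 epochs on the base set

dagger = make_dataset(1000)  # stand-in for 1000 DAgger-collected examples
train(ConcatDataset([base, dagger]), epochs=100)  # stage 2: 100 more epochs
```

DAgger aggregates expert-labeled data gathered from the learner's own rollouts, which is why the second stage trains on the union of the original and newly collected datasets rather than on the new data alone.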