NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning
Authors: Ruiqi Ni, Ahmed H Qureshi
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method in various cluttered 3D environments, including the Gibson dataset, and demonstrate its ability to solve motion planning problems for 4-DOF and 6-DOF robot manipulators where the traditional grid-based Eikonal planners often face the curse of dimensionality. Furthermore, the results show that our method exhibits high success rates and significantly lower computational times than the state-of-the-art methods, including NMPs that require training data from classical planners. |
| Researcher Affiliation | Academia | Ruiqi Ni Department of Computer Science Purdue University ni117@purdue.edu Ahmed H. Qureshi Department of Computer Science Purdue University ahqureshi@purdue.edu |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found. The methodology is described using text and neural network architecture diagrams. |
| Open Source Code | Yes | Our code is released: https://github.com/ruiqini/NTFields. |
| Open Datasets | Yes | We evaluate our method in various cluttered 3D environments, including the Gibson dataset... and ...we use OMPL's RRT*, Lazy PRM* implementation (Sucan et al., 2012) and the pykonal library (White et al., 2020) for FMM on 100×100×100 resolution grids. |
| Dataset Splits | No | No explicit training/test/validation split percentages or counts were provided. The paper mentions using '1 million start and goal pairs to train our models' and '2000 unseen start and goal pairs for solving motion planning tasks' (for testing), but gives no details on a validation split. |
| Hardware Specification | Yes | All experiments were performed on a device with a 3.50GHz × 8 Intel Core i9 processor, 32 GB RAM, and a GeForce RTX 3090 GPU. |
| Software Dependencies | No | No specific software versions for libraries or frameworks were provided. The paper mentions using 'OMPL's RRT*, Lazy PRM* implementation', 'MPNet's open-source code', and the 'pykonal library' for FMM, but without version numbers. |
| Experiment Setup | Yes | We train our models end-to-end with their objective functions using the AdamW (Loshchilov & Hutter, 2017) optimizer. ... The λ is set to 0 for the first l0 epochs, then increased linearly to 1 between epoch l0 to l1, and then remains at 1 for the rest of the epochs. We observed that gradually increasing λ stabilizes the training process, preventing it from converging to local minima. We choose l0 = 500, l1 = 1000 for Gibson scenes. ... where α ∈ ℝ is a step size hyperparameter. |
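The λ ramp described in the Experiment Setup row (zero for the first l0 epochs, linear from 0 to 1 between l0 and l1, then constant) can be sketched as follows. This is a minimal illustration; the function name, signature, and epoch indexing are assumptions, not taken from the paper or its released code.

```python
def lambda_schedule(epoch: int, l0: int = 500, l1: int = 1000) -> float:
    """Loss-weight ramp as described in the paper (illustrative sketch):
    lambda = 0 for epoch < l0, linear ramp on [l0, l1), then 1 thereafter.
    Defaults l0=500, l1=1000 match the values reported for Gibson scenes."""
    if epoch < l0:
        return 0.0
    if epoch < l1:
        # Linear interpolation between 0 at epoch l0 and 1 at epoch l1.
        return (epoch - l0) / (l1 - l0)
    return 1.0
```

For example, with the Gibson defaults the weight is 0.0 at epoch 100, 0.5 at epoch 750, and 1.0 from epoch 1000 onward.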