A Closed-Loop Perception, Decision-Making and Reasoning Mechanism for Human-Like Navigation
Authors: Wenqi Zhang, Kai Zhao, Peng Li, Xiao Zhu, Yongliang Shen, Yanna Ma, Yingfeng Chen, Weiming Lu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show our method is more adaptable to novel scenarios compared with state-of-the-art approaches. We assess our algorithm on two benchmarks that we design for evaluating navigation ability in few-shot and zero-shot scenes. Experiments demonstrate that our approach achieves significant improvement over various baselines, and is more reliable in novel scenarios. In addition, we deploy the algorithm to a real robot in a crowded building. |
| Researcher Affiliation | Collaboration | Wenqi Zhang (1), Kai Zhao (2), Peng Li (3,6), Xiao Zhu (4), Yongliang Shen (1), Yanna Ma (5), Yingfeng Chen (2) and Weiming Lu (1). (1) College of Computer Science and Technology, Zhejiang University; (2) Netease Fuxi Robot Department; (3) Institute of Software, Chinese Academy of Sciences; (4) College of Mechanical Engineering, Zhejiang University of Technology; (5) University of Shanghai for Science and Technology; (6) University of Chinese Academy of Sciences, Nanjing |
| Pseudocode | No | The paper describes the model and learning processes but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Datasets and part of the code will be released at https://github.com/zwq2018/CL_PDR_NAV. |
| Open Datasets | Yes | Since there is no publicly available navigation dataset, we adopt the A* algorithm to plan a global path in a static scene for the first stage of pre-training... Finally, we collect 200 trajectories, each with about 300 observation-action pairs {o_t, a_t}. Also: Datasets and part of the code will be released at https://github.com/zwq2018/CL_PDR_NAV. |
| Dataset Splits | No | The paper mentions training and testing scenarios/benchmarks but does not explicitly provide details about dataset validation splits, percentages, or sample counts for reproduction. |
| Hardware Specification | No | The paper specifies the hardware used for the real-world robot deployment (Velodyne-16 LiDAR, Intel RealSense D435, NVIDIA Jetson AGX Xavier) but does not provide details about the specific hardware (e.g., GPU/CPU models) used for training the models during the experimental phases. |
| Software Dependencies | No | The paper mentions ROS-Stage, OpenAI Gym, PyTorch, Microsoft's NNI, Ubuntu, and ROS Melodic but does not provide specific version numbers for the key software libraries and dependencies used in the experiments. |
| Experiment Setup | Yes | We set the latent state to 90 dimensions, the sequence length n to 20, and α, β, η, λ, γ to 1.0, 20.0, 5e-4, 0.01, 0.99. We discover that updating the reasoning model once every 10 PPO updates is more appropriate. ... We use the Adam optimizer with a learning rate of 1e-3 in VAE-Enhanced Demonstration learning and 3e-5 in RL-Enhanced Interaction learning. |
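For convenience, the hyperparameters quoted in the Experiment Setup row can be collected into a single config sketch. The key names below are illustrative, not the paper's own identifiers; the values are those reported, with the RL learning rate read as 3e-5 on the assumption that the minus sign was lost in text extraction:

```python
# Hyperparameter summary reconstructed from the quoted experiment setup.
# Key names are illustrative; only the values come from the paper.
HPARAMS = {
    "latent_dim": 90,               # dimensionality of the latent state
    "seq_len": 20,                  # sequence length n
    "alpha": 1.0,                   # loss coefficient α
    "beta": 20.0,                   # loss coefficient β
    "eta": 5e-4,                    # coefficient η
    "lambda": 0.01,                 # coefficient λ
    "gamma": 0.99,                  # discount factor γ
    "reasoning_update_every": 10,   # reasoning model updated once per 10 PPO updates
    "lr_vae_demo": 1e-3,            # Adam, VAE-Enhanced Demonstration learning
    "lr_rl_interaction": 3e-5,      # Adam, RL-Enhanced Interaction learning (assumed 3e-5)
}
```

A reproduction attempt would still need the unspecified details flagged above (dataset splits, training hardware, library versions) before these values could be plugged into a training script.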