NeoNav: Improving the Generalization of Visual Navigation via Generating Next Expected Observations
Authors: Qiaoyun Wu, Dinesh Manocha, Jun Wang, Kai Xu
AAAI 2020, pp. 10001-10008 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted extensive evaluations on both real-world and synthetic benchmarks, and show that our model consistently outperforms the state-of-the-art models in terms of success rate, data efficiency, and generalization. |
| Researcher Affiliation | Academia | 1Nanjing University of Aeronautics and Astronautics, 2The University of Maryland, 3National University of Defense Technology |
| Pseudocode | No | The paper describes the network architecture and algorithm details in text and figures (e.g., Figure 1 for model overview), but it does not include a dedicated pseudocode block or algorithm figure. |
| Open Source Code | Yes | Project page: http://kevinkaixu.net/projects/neonav.html |
| Open Datasets | Yes | We conducted extensive evaluations on public datasets of both synthetic (AI2-THOR framework (Zhu et al. 2017)) and real-world (Active Vision Dataset, AVD (Ammirato et al. 2017)) scenes. |
| Dataset Splits | Yes | AVD contains 11 relatively complex real-world houses, of which 8 houses were used for training and 3 for testing. AI2-THOR contains 120 scenes in four categories including kitchen, living room, bedroom, and bathroom. Each category includes 30 scenes, out of which 20 are used for training and 10 for testing. |
| Hardware Specification | No | The paper does not specify any particular hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | Yes | The three hyperparameters are empirically set as α = 0.01, β = 0.0001 and γ = 1 throughout our experiments. |
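
The dataset-split and hyperparameter rows above quote concrete values (AVD: 8 train / 3 test houses; AI2-THOR: 20 train / 10 test scenes per category; α = 0.01, β = 0.0001, γ = 1) but not how they would be wired into a reproduction script. The sketch below simply collects those values into one configuration object. All identifiers (`NeoNavConfig`, `total_loss`, the `term_a`/`term_b`/`term_c` keys) and the assumption that α, β, γ act as additive loss weights are illustrative guesses, not taken from the paper.

```python
# Minimal configuration sketch for the reported setup.
# Only the numeric values (dataset splits and alpha/beta/gamma) come from the
# quoted paper text; every identifier and the loss-combination form below is
# an assumption made for illustration.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class NeoNavConfig:
    # Loss weights quoted in the "Experiment Setup" row.
    alpha: float = 0.01
    beta: float = 0.0001
    gamma: float = 1.0

    # AVD: 11 real-world houses, 8 used for training and 3 for testing.
    avd_split: Tuple[int, int] = (8, 3)

    # AI2-THOR: 4 categories x 30 scenes, 20 train / 10 test per category.
    thor_categories: List[str] = field(
        default_factory=lambda: ["kitchen", "living_room", "bedroom", "bathroom"]
    )
    thor_split_per_category: Tuple[int, int] = (20, 10)


def total_loss(cfg: NeoNavConfig, losses: Dict[str, float]) -> float:
    """Hypothetical weighted sum of three loss terms.

    The paper sets alpha, beta, gamma empirically; which term each weight
    multiplies is not specified in the quoted text, so this pairing is a
    placeholder only.
    """
    return (
        cfg.alpha * losses["term_a"]
        + cfg.beta * losses["term_b"]
        + cfg.gamma * losses["term_c"]
    )


if __name__ == "__main__":
    cfg = NeoNavConfig()
    # With unit losses the weighted sum is 0.01 + 0.0001 + 1.0 = 1.0101.
    print(total_loss(cfg, {"term_a": 1.0, "term_b": 1.0, "term_c": 1.0}))
```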