Learning to Map for Active Semantic Goal Navigation
Authors: Georgios Georgakis, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Kostas Daniilidis
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on the Matterport3D (MP3D) (Chang et al., 2017) dataset using the Habitat (Savva et al., 2019) simulator. |
| Researcher Affiliation | Collaboration | Georgios Georgakis*1, Bernadette Bucher*1, Karl Schmeckpeper1, Siddharth Singh2, Kostas Daniilidis1 1University of Pennsylvania, 2Amazon |
| Pseudocode | Yes | Algorithm 1: L2M for Object Nav |
| Open Source Code | Yes | Trained models and code can be found here: https://github.com/ggeorgak11/L2M. |
| Open Datasets | Yes | We perform experiments on the Matterport3D (MP3D) (Chang et al., 2017) dataset using the Habitat (Savva et al., 2019) simulator. |
| Dataset Splits | Yes | We use the standard train/val split as the test set is held-out for the online Habitat challenge, which contains 56 scenes for training and 11 for validation. |
| Hardware Specification | Yes | We executed training and testing on our internal cluster on RTX 2080 Ti GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2017)' and the 'Habitat (Savva et al., 2019) simulator' with citations, but does not provide explicit version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | The models are trained in the PyTorch (Paszke et al., 2017) framework with the Adam optimizer and a learning rate of 0.0002. All experiments are conducted with an ensemble size N = 4. For the semantic map prediction we receive RGB and depth observations of size 256 × 256 and define crop and global map dimensions as h = w = 64, H = W = 384, respectively. |
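The reported hyperparameters can be collected into a single configuration for a reproduction attempt. This is a minimal sketch: the dictionary keys and the `describe` helper are hypothetical names chosen for illustration, not identifiers from the authors' released code at the repository linked above.

```python
# Hyperparameters as reported in the paper (L2M, Georgakis et al., ICLR 2022).
# Key names are illustrative and do not come from the authors' repository.
L2M_CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 2e-4,          # reported as 0.0002
    "ensemble_size": 4,             # N = 4 semantic-map predictors
    "obs_size": (256, 256),         # RGB and depth observation resolution
    "crop_map_hw": (64, 64),        # h = w = 64
    "global_map_hw": (384, 384),    # H = W = 384
}

def describe(cfg):
    """Return a one-line summary of the training configuration."""
    return (
        f"{cfg['optimizer']} lr={cfg['learning_rate']}, "
        f"ensemble N={cfg['ensemble_size']}, "
        f"global map {cfg['global_map_hw'][0]}x{cfg['global_map_hw'][1]}"
    )
```

A reproduction script could read this configuration when instantiating the ensemble and the optimizer, keeping the paper's values in one auditable place.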