Multiple Thinking Achieving Meta-Ability Decoupling for Object Navigation

Authors: Ronghao Dang, Lu Chen, Liuyi Wang, Zongtao He, Chengju Liu, Qijun Chen

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments on AI2-Thor and RoboTHOR, we demonstrate that our method outperforms state-of-the-art (SOTA) methods on both typical and zero-shot object navigation tasks.
Researcher Affiliation | Academia | Department of Control Science and Engineering, Tongji University, Shanghai 201804, China. Correspondence to: Chengju Liu <liuchengju@tongji.edu.cn>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code (a specific repository link, an explicit code-release statement, or code in supplementary materials) for the methodology it describes.
Open Datasets | Yes | AI2-Thor (Kolve et al., 2017) and RoboTHOR (Deitke et al., 2020) are our primary experimental platforms. AI2-Thor includes 30 different floorplans for each of 4 room layouts: kitchen, living room, bedroom, and bathroom. For each scene type, we use 20 rooms for training, 5 rooms for validation, and 5 rooms for testing. RoboTHOR consists of a set of 89 apartments, 75 of which are accessible. We use 60 for training and 15 for validation.
Dataset Splits | Yes | For each scene type, we use 20 rooms for training, 5 rooms for validation, and 5 rooms for testing. RoboTHOR consists of a set of 89 apartments, 75 of which are accessible. We use 60 for training and 15 for validation. (See the split sketch after the table.)
Hardware Specification | Yes | We train our model with 18 workers on 2 RTX 2080Ti Nvidia GPUs, in a total of 3M navigation episodes. (See the worker-placement sketch after the table.)
Software Dependencies | No | The paper mentions using the asynchronous advantage actor-critic (A3C) algorithm, ResNet18, DETR, and LSTM, but does not specify exact version numbers for programming languages, libraries, or frameworks (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x). (See the component sketch after the table.)
Experiment Setup | Yes | The dropout rate is set to 0.3, and the meta-ability reward R_MA is only utilized in the first 0.2M (C) episodes. (See the reward-schedule sketch after the table.)
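The Dataset Splits row gives counts only, so the Python sketch below shows one way to materialize them. The "FloorPlan" names follow AI2-Thor's public scene-naming convention, but which specific rooms the authors assigned to each split is an assumption, not taken from the paper.

```python
# Hedged sketch of the AI2-Thor splits reported above: 20 train / 5 val /
# 5 test rooms for each of the 4 scene types (80/20/20 rooms in total).
# The room-to-split assignment is illustrative; the paper does not list it.
AI2THOR_SCENE_OFFSETS = {
    "kitchen": 0,        # FloorPlan1-30
    "living_room": 200,  # FloorPlan201-230
    "bedroom": 300,      # FloorPlan301-330
    "bathroom": 400,     # FloorPlan401-430
}

def ai2thor_splits():
    splits = {"train": [], "val": [], "test": []}
    for offset in AI2THOR_SCENE_OFFSETS.values():
        rooms = [f"FloorPlan{offset + i}" for i in range(1, 31)]
        splits["train"] += rooms[:20]
        splits["val"] += rooms[20:25]
        splits["test"] += rooms[25:30]
    return splits

splits = ai2thor_splits()
assert tuple(len(splits[k]) for k in ("train", "val", "test")) == (80, 20, 20)
# RoboTHOR is split analogously: 60 of the 75 accessible apartments for
# training and 15 for validation.
```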
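The Hardware Specification row reports 18 asynchronous workers on 2 GPUs but not how workers are placed. A minimal round-robin assignment, assumed rather than taken from the paper:

```python
# Illustrative placement of the reported 18 A3C workers onto the two
# RTX 2080Ti GPUs; the authors' actual assignment scheme is not stated.
NUM_WORKERS = 18
DEVICES = ["cuda:0", "cuda:1"]

worker_device = {rank: DEVICES[rank % len(DEVICES)] for rank in range(NUM_WORKERS)}
# Even-ranked workers land on cuda:0, odd-ranked workers on cuda:1.
```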
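The Software Dependencies row names the building blocks (ResNet18 and DETR features, an LSTM state encoder, A3C actor-critic heads) without versions. The PyTorch sketch below wires those pieces together purely for illustration; the feature dimensions, pooling, and six-action space are assumptions, not the authors' implementation.

```python
# Minimal PyTorch sketch of the component stack named in the paper. All
# sizes and the wiring are assumed; only the 0.3 dropout rate and the
# ResNet18 / DETR / LSTM / A3C component names come from the paper.
import torch
import torch.nn as nn

class NavAgentSketch(nn.Module):
    def __init__(self, num_actions=6, hidden=512):
        super().__init__()
        self.global_proj = nn.Linear(512, hidden)   # ResNet18 global feature
        self.object_proj = nn.Linear(256, hidden)   # DETR per-object features
        self.dropout = nn.Dropout(p=0.3)            # rate reported in the paper
        self.lstm = nn.LSTMCell(hidden * 2, hidden) # recurrent state encoder
        self.actor = nn.Linear(hidden, num_actions) # A3C policy head
        self.critic = nn.Linear(hidden, 1)          # A3C value head

    def forward(self, resnet_feat, detr_feats, state=None):
        g = self.global_proj(resnet_feat)           # (B, hidden)
        o = self.object_proj(detr_feats).mean(1)    # pool over detections
        x = self.dropout(torch.cat([g, o], dim=-1))
        h, c = self.lstm(x, state)
        return self.actor(h), self.critic(h), (h, c)
```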
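The Experiment Setup row implies a two-phase reward schedule: the meta-ability reward R_MA is added only during the first C = 0.2M of the 3M training episodes. A sketch of that gating, with placeholder reward inputs:

```python
# Hedged sketch of the reward schedule implied by the setup row. The
# cutoff C = 0.2M episodes and the 3M total are from the paper; the
# reward values themselves are placeholders.
META_REWARD_CUTOFF = 200_000   # C
TOTAL_EPISODES = 3_000_000

def episode_reward(base_reward: float, meta_ability_reward: float,
                   episode_idx: int) -> float:
    """Add R_MA to the task reward only while episode_idx < C."""
    if episode_idx < META_REWARD_CUTOFF:
        return base_reward + meta_ability_reward
    return base_reward
```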