Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
NextBestPath: Efficient 3D Mapping of Unseen Environments
Authors: Shiyao Li, Antoine Guédon, Clémentin Boittiaux, Shizhe Chen, Vincent Lepetit
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | By leveraging online data collection, data augmentation and curriculum learning, NBP significantly outperforms state-of-the-art methods on both the existing MP3D dataset and our AiMDoom dataset, achieving more efficient mapping in indoor environments of varying complexity. |
| Researcher Affiliation | Academia | ¹LIGM, École Nationale des Ponts et Chaussées, IP Paris, Univ Gustave Eiffel, CNRS, France {firstname.lastname}@enpc.fr ²Inria, École normale supérieure, CNRS, PSL Research University, France {firstname.lastname}@inria.fr |
| Pseudocode | Yes | Algorithm 1 Training procedure. |
| Open Source Code | No | Project page: https://shiyao-li.github.io/nbp/ |
| Open Datasets | Yes | We evaluate our model on the Matterport3D (MP3D) dataset (Chang et al., 2017) and our own AiMDoom dataset. |
| Dataset Splits | Yes | For MP3D, we use the same setting as prior work (Yan et al., 2023) for fair comparison... with 10 and 5 scenes in training and evaluation respectively. For AiMDoom, we utilize a 70/30 train/test split for scenes in each difficulty level. |
| Hardware Specification | Yes | The training is performed on a single NVIDIA RTX A6000 GPU, with an average completion time of 25 hours. |
| Software Dependencies | Yes | We converted the maps to the widely used OBJ format, and used Blender (Community, 2018) to consolidate the texture images of each map into a single texture image. This makes the maps compatible with PyTorch3D (Ravi et al., 2020) and Open3D (Zhou et al., 2018). |
| Experiment Setup | Yes | The model is trained for at most N = 15 iterations, with the first Ne = 1 iterations using easier samples and Sn = 2 trajectories per scene. For subsequent iterations, we use all samples and reduce the trajectory count to Sn = 1 per scene. Each trajectory has a length of 100 steps and starts at a random location. During the first data collection iteration, we randomly sample 1,000 validation examples from memory and exclude them from training. Gradient accumulation is used in training, which results in an effective batch size of 448. The learning rate is set to 0.001 and is decayed by a factor of 0.1 if the validation loss plateaus. We apply early stopping to terminate training when validation loss no longer decreases. |
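The optimization schedule quoted in the Experiment Setup row (gradient accumulation to an effective batch size of 448, learning rate 0.001 decayed by 0.1 on a validation plateau, early stopping) can be sketched in isolation. This is a minimal illustration, not the paper's implementation: the micro-batch size, patience, and the synthetic loss sequence are assumptions.

```python
# Hedged sketch of the reported schedule. MICRO_BATCH and patience are
# hypothetical; only the effective batch size (448), initial LR (0.001),
# and decay factor (0.1) come from the paper's setup description.

EFFECTIVE_BATCH = 448
MICRO_BATCH = 56                                # assumed per-step batch
ACCUM_STEPS = EFFECTIVE_BATCH // MICRO_BATCH    # optimizer step every 8 micro-batches

def schedule(val_losses, lr=1e-3, factor=0.1, patience=2):
    """Decay LR by `factor` on a validation plateau; early-stop on the next.

    Returns (final_lr, epochs_run) for a sequence of validation losses.
    """
    best, bad, decayed = float("inf"), 0, False
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - 1e-6:          # validation loss improved
            best, bad = loss, 0
        else:
            bad += 1
        if bad > patience:
            if not decayed:             # first plateau: decay the LR
                lr, bad, decayed = lr * factor, 0, True
            else:                       # second plateau: stop training
                return lr, epoch
    return lr, len(val_losses)

# Synthetic losses: improve, plateau (triggers decay), plateau again (stops).
final_lr, epochs = schedule([1.0, 0.8, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7])
```

With these synthetic losses the run decays once and then stops early, so `final_lr` is 0.0001 and training halts before consuming all ten epochs.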
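The AiMDoom 70/30 per-difficulty split quoted in the Dataset Splits row can also be sketched. The scene names, difficulty levels, and seed below are invented for illustration; the paper's actual scene lists are not reproduced.

```python
# Hedged sketch of a 70/30 train/test split applied within each
# difficulty level, as described for AiMDoom. Scene identifiers and the
# RNG seed are assumptions made for this example only.
import random

def split_scenes(scenes_by_level, train_frac=0.7, seed=0):
    """Shuffle scenes within each level and split into train/test."""
    rng = random.Random(seed)
    splits = {}
    for level, scenes in scenes_by_level.items():
        scenes = sorted(scenes)          # fixed order before shuffling
        rng.shuffle(scenes)
        cut = round(train_frac * len(scenes))
        splits[level] = {"train": scenes[:cut], "test": scenes[cut:]}
    return splits

scenes = {"easy": [f"easy_{i:02d}" for i in range(10)],
          "hard": [f"hard_{i:02d}" for i in range(10)]}
splits = split_scenes(scenes)
```

Splitting inside each level, rather than over the pooled scene list, keeps the difficulty distribution identical between train and test.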