Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Modality-Agnostic Topology Aware Localization

Authors: Farhad Ghazvinian Zanjani, Ilia Karmanov, Hanno Ackermann, Daniel Dijkman, Simone Merlin, Max Welling, Fatih Porikli

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The experimental results demonstrate decimeter-level accuracy for localization using different sensory inputs. |
| Researcher Affiliation | Industry | Farhad G. Zanjani, Ilia Karmanov, Hanno Ackermann, Daniel Dijkman, Simone Merlin, Max Welling, Fatih Porikli (Qualcomm AI Research, EMAIL) |
| Pseudocode | Yes | Algorithm 1: Jointly learning the embedding and the transportation plan. |
| Open Source Code | No | The paper references https://github.com/theJollySin/mazelib (GNU General Public License v3.0), a third-party library used for synthetic data, but does not provide a link to, or an explicit statement about, open-sourcing the code for its own method. |
| Open Datasets | Yes | "We setup an experiment by using the iGibson dataset [Shen et al., 2020]. To validate our approach without the added complexities of data, we first setup an experiment with a simple 2D Maze environment [Henriques and Vedaldi, 2018, Parisotto and Salakhutdinov, 2017]." The Maze environment is based on https://github.com/theJollySin/mazelib (GNU General Public License v3.0). |
| Dataset Splits | No | The paper states, for the iGibson dataset: "We create several image sequences by navigating the camera through all rooms/zones of each environments. We leave out some of the image sequences for the test set." It does not provide specific percentages or counts for train/validation/test splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., specific library or framework versions such as PyTorch 1.9). |
| Experiment Setup | No | "For details about the implementation, hyper-parameters and training please refer to the supplementary material." This indicates that specific experimental setup details are not in the main text. |