End-to-End Egospheric Spatial Memory
Authors: Daniel James Lenton, Stephen James, Ronald Clark, Andrew Davison
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through our broad set of experiments, we show that ESM provides a general computation graph for embodied spatial reasoning, and the module forms a bridge between real-time mapping systems and differentiable memory architectures. |
| Researcher Affiliation | Collaboration | Daniel Lenton¹, Stephen James¹, Ronald Clark², Andrew J. Davison¹ (¹Dyson Robotics Lab, ²Department of Computing, Imperial College London) |
| Pseudocode | Yes | Algorithm 1: ESM Step (a simplified sketch of one such step follows the table). |
| Open Source Code | Yes | Implementation at: https://github.com/ivy-dl/memory. |
| Open Datasets | Yes | For the object segmentation experiment, we use downsampled 60×80 and 120×160 images from the ScanNet dataset, which we first RGB-depth align. |
| Dataset Splits | Yes | The losses for each network evaluated on the training set and validation set during the course of training are presented in Fig. 12. |
| Hardware Specification | Yes | running on Nvidia RTX 2080 GPU |
| Software Dependencies | Yes | The implementation of our module is therefore jointly compatible with TensorFlow 2.0, PyTorch, MXNet, JAX and NumPy. (A backend-switching sketch follows the table.) |
| Experiment Setup | Yes | All networks use a batch size of 16, an unroll size of 16, and are trained for 250k steps using an Adam optimizer with a 1e-4 learning rate. (Restated as a sketch after the table.) |
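The excerpt names Algorithm 1 but does not reproduce it. Below is a minimal NumPy sketch of what a single egospheric update step looks like at a high level: the stored geometry is rigidly forward-warped into the new ego frame, the incoming observation is appended, and an equirectangular feature image is re-rendered with nearest-depth fusion. All function and variable names (`sphere_project`, `esm_step`, `rel_pose`, and so on) are illustrative assumptions, not the authors' implementation, which additionally keeps the whole step differentiable.

```python
import numpy as np

def sphere_project(points, h, w):
    """Equirectangular projection: 3D ego-frame points -> pixel coords and range."""
    r = np.linalg.norm(points, axis=1)
    theta = np.arccos(np.clip(points[:, 2] / np.maximum(r, 1e-8), -1.0, 1.0))  # polar
    phi = np.arctan2(points[:, 1], points[:, 0])                               # azimuth
    v = np.clip(theta / np.pi * (h - 1), 0, h - 1).astype(int)
    u = np.clip((phi + np.pi) / (2 * np.pi) * (w - 1), 0, w - 1).astype(int)
    return u, v, r

def esm_step(mem_pts, mem_feat, rel_pose, obs_pts, obs_feat, h=60, w=120):
    """One update: forward-warp the stored points into the new ego frame using the
    relative pose, append the new observation, and render the egosphere image,
    letting the nearest surface win at each pixel."""
    R, t = rel_pose[:3, :3], rel_pose[:3, 3]
    pts = np.concatenate([mem_pts @ R.T + t, obs_pts])
    feat = np.concatenate([mem_feat, obs_feat])
    u, v, r = sphere_project(pts, h, w)
    image = np.zeros((h, w, feat.shape[1]))
    for i in np.argsort(-r):  # paint far-to-near so the nearest point survives
        image[v[i], u[i]] = feat[i]
    return pts, feat, image

# Toy usage: start from an empty memory and fuse one random observation.
mem_pts, mem_feat = np.zeros((0, 3)), np.zeros((0, 8))
obs_pts, obs_feat = np.random.randn(100, 3), np.random.randn(100, 8)
mem_pts, mem_feat, image = esm_step(mem_pts, mem_feat, np.eye(4), obs_pts, obs_feat)
```

Note the simplification: this sketch accumulates a growing point set, whereas the paper's module maintains the fused egosphere image itself as the recurrent state.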
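The multi-framework support quoted above comes from implementing the module in Ivy. The sketch below shows the general backend-switching pattern rather than the ESM module itself; it uses `ivy.set_backend`, which matches recent Ivy releases (the version contemporary with the paper exposed `ivy.set_framework` instead), and the toy `fuse` function is a stand-in for the actual ESM ops.

```python
import ivy

def fuse(a, b):
    # Written once against Ivy's framework-agnostic API; executes with whichever
    # backend is currently set (NumPy, PyTorch, TensorFlow, JAX, ...).
    return ivy.concat([a, b], axis=-1)

for backend in ["numpy", "torch", "tensorflow", "jax"]:
    ivy.set_backend(backend)
    out = fuse(ivy.ones((2, 3)), ivy.zeros((2, 3)))
    print(backend, out.shape)  # (2, 6), backed by a native array of each framework
    ivy.unset_backend()
```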
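The reported optimization setup is straightforward to restate as code. The skeleton below is a PyTorch sketch that fixes only the quoted hyperparameters (batch size 16, unroll length 16, 250k steps, Adam at 1e-4); the model, data, and loss are placeholders, not the paper's networks.

```python
import torch

BATCH, UNROLL, STEPS, LR = 16, 16, 250_000, 1e-4

# Placeholder recurrent model standing in for the paper's ESM-based networks.
model = torch.nn.GRU(input_size=32, hidden_size=64, batch_first=True)
opt = torch.optim.Adam(model.parameters(), lr=LR)

for step in range(STEPS):
    # Placeholder batch: BATCH sequences, each unrolled for UNROLL timesteps.
    x = torch.randn(BATCH, UNROLL, 32)
    target = torch.randn(BATCH, UNROLL, 64)
    out, _ = model(x)
    loss = torch.nn.functional.mse_loss(out, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```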