E2PNet: Event to Point Cloud Registration with Spatio-Temporal Representation Learning

Authors: Xiuhong Lin, Changjie Qiu, Zhipeng Cai, Siqi Shen, Yu Zang, Weiquan Liu, Xuesheng Bian, Matthias Müller, Cheng Wang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Experiments on the MVSEC and VECtor datasets demonstrate the superiority of E2PNet over hand-crafted and other learning-based methods."
Researcher Affiliation: Collaboration. (a) Fujian Key Lab of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University (XMU), China; (b) Key Laboratory of Multimedia Trusted Perception and Efficient Computing, XMU, China; (c) Intel Labs; (d) Apple Inc.; (e) Yancheng Institute of Technology, China.
Pseudocode: No. The paper describes its methods in detail through text and diagrams (Figures 2 and 3) but does not include any formal pseudocode or algorithm blocks.
Open Source Code: No. The abstract states "The source code can be found at: E2PNet." but does not provide a direct link (URL) to a repository or explicitly state that the code is in the supplementary materials.
Open Datasets: Yes. "We use the widely used MVSEC [43] and VECtor [44] to build the MVSEC-E2P and VECtor-E2P datasets, which incorporate LiDAR, traditional cameras and event cameras at the same time."
Dataset Splits: Yes. "MVSEC [43] uses a 16-beam LiDAR and an event camera with a resolution of (346, 260)... we use the indoor-x and indoor-y sequences for training and testing respectively, where x ∈ [1, 3] and y = 4. VECtor [44] uses a 128-beam LiDAR and an event camera with a resolution of (640, 480)... We use the units-dolly, units-scooter, corridors-dolly and corridors-walk sequences for training, and the school-dolly and school-scooter sequences for evaluation."
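The split description above can be captured as a plain configuration mapping. This is a minimal sketch only: the sequence-name strings are assumptions based on the quoted text ("indoor-x", "units-dolly", etc.), and the actual file or sequence names in the MVSEC/VECtor releases may differ.

```python
# Hypothetical train/test split mapping for the two E2P datasets,
# transcribed from the paper's quoted split description.
SPLITS = {
    "MVSEC-E2P": {
        # indoor-x for training, x in [1, 3]; indoor-y for testing, y = 4
        "train": [f"indoor-{x}" for x in (1, 2, 3)],
        "test": ["indoor-4"],
    },
    "VECtor-E2P": {
        "train": ["units-dolly", "units-scooter",
                  "corridors-dolly", "corridors-walk"],
        "test": ["school-dolly", "school-scooter"],
    },
}
```

A dictionary like this makes the split reproducible in code rather than prose, and keeps the train/test partition of each dataset in one place.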
Hardware Specification: Yes. "Training is done following the setup of individual baselines on a 3090Ti GPU." "All experiments were performed with a batch size of 1 on a 3090Ti GPU using the MVSEC-E2P dataset."
Software Dependencies: No. "All methods are implemented using Pytorch." However, the paper does not specify the version number for PyTorch or any other software dependency.
Experiment Setup: Yes. "Implementation Details. We follow the FEN [26, 20] principle and acquire 20000 consecutive events at a time and sample N = 8192 events from them... All methods are implemented using Pytorch. Training is done following the setup of individual baselines on a 3090Ti GPU... All experiments were performed with a batch size of 1 on a 3090Ti GPU using the MVSEC-E2P dataset."
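The quoted setup (take 20000 consecutive events, subsample N = 8192 of them) can be sketched as below. This is not the paper's implementation; the event array layout (x, y, t, polarity) and the use of uniform random sampling without replacement are assumptions, since the quoted text does not specify the sampling strategy.

```python
import numpy as np

def sample_events(events, window=20000, n=8192, rng=None):
    """Take one window of consecutive events and subsample n of them.

    `events` is assumed to be an (M, 4) array of (x, y, t, polarity)
    rows; this field layout is hypothetical, not from the paper.
    """
    rng = rng or np.random.default_rng()
    chunk = events[:window]                              # 20000 consecutive events
    idx = rng.choice(len(chunk), size=n, replace=False)  # sample without replacement
    return chunk[np.sort(idx)]                           # preserve temporal order
```

Sorting the sampled indices keeps the subsampled events in their original (temporal) order within the window, which matters for any downstream spatio-temporal representation.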