MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models

Authors: Sourav Biswas, Jerry Liu, Kelvin Wong, Shenlong Wang, Raquel Urtasun

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments demonstrate that our method significantly reduces the joint geometry and intensity bitrate over prior state-of-the-art LiDAR compression methods, with a reduction of 7-17% and 6-19% on the UrbanCity and SemanticKITTI datasets respectively.
Researcher Affiliation | Collaboration | Sourav Biswas (1,2), Jerry Liu (1), Kelvin Wong (1,3), Shenlong Wang (1,3), Raquel Urtasun (1,3); 1: Uber Advanced Technologies Group, 2: University of Waterloo, 3: University of Toronto; {souravb,jerryl,kelvin.wong,slwang,urtasun}@uber.com
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to its own source code via a specific repository link or an explicit release statement. The links provided are for third-party baselines (MPEG Anchor, MPEG TMC13) and a library (zlib).
Open Datasets | Yes | We validate the performance of our approach on two large datasets, namely UrbanCity [7] and SemanticKITTI [8]. [7] Ming Liang, Bin Yang, Wenyuan Zeng, Yun Chen, Rui Hu, Sergio Casas, and Raquel Urtasun. PnPNet: End-to-end perception and prediction with tracking in the loop, 2020. [8] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, and Jürgen Gall. SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In IEEE International Conference on Computer Vision (ICCV), 2019.
Dataset Splits | Yes | We train our entropy models on 5000 sequences and evaluate on a test set of 500. ... In our experiments, we use the official train/test splits: sequences 00 to 10 (except for 08) for training and sequences 11 to 21 to evaluate reconstruction quality. Since semantic labels for the test split are unavailable, we evaluate downstream tasks on the validation sequence 08.
Hardware Specification | No | The paper states that training is distributed over 16 GPUs, but does not specify the GPU models or any other hardware details required for reproduction.
Software Dependencies | No | The paper mentions implementation in PyTorch and use of Horovod, but does not provide version numbers for these software components.
Experiment Setup | Yes | Our models use K = 4 rounds of aggregation and k = 5 nearest neighbors for continuous convolution. ... We train our models over 150,000 steps using the Adam optimizer [48] with a learning rate of 1e-4 and a batch size of 16.
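The SemanticKITTI split quoted in the Dataset Splits row can be made concrete with a small helper. This is an illustrative sketch only; the names `SEMANTIC_KITTI_SPLITS` and `split_of` are hypothetical and do not come from the paper:

```python
# SemanticKITTI official splits as described in the paper:
# sequences 00-10 (except 08) for training, 11-21 for test,
# and sequence 08 held out as validation for downstream tasks.
SEMANTIC_KITTI_SPLITS = {
    "train": [f"{i:02d}" for i in range(11) if i != 8],
    "val": ["08"],
    "test": [f"{i:02d}" for i in range(11, 22)],
}


def split_of(sequence: str) -> str:
    """Return which split a SemanticKITTI sequence ID belongs to."""
    for split, seqs in SEMANTIC_KITTI_SPLITS.items():
        if sequence in seqs:
            return split
    raise ValueError(f"Unknown sequence: {sequence}")
```

For example, `split_of("08")` returns `"val"`, matching the paper's note that downstream tasks are evaluated on validation sequence 08.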
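The hyperparameters reported in the Experiment Setup row can be gathered in one place. Since the authors did not release code, the structure below is a hypothetical sketch of how the configuration might be declared (e.g., to feed a PyTorch `Adam` optimizer and a Horovod launcher):

```python
# Training configuration as reported in the paper. Key names are
# illustrative; only the values are taken from the text.
TRAIN_CONFIG = {
    "optimizer": "Adam",        # Adam optimizer [48]
    "learning_rate": 1e-4,      # reported learning rate
    "batch_size": 16,           # reported batch size
    "num_steps": 150_000,       # total training steps
    "aggregation_rounds": 4,    # K = 4 rounds of aggregation
    "knn_neighbors": 5,         # k = 5 nearest neighbors for continuous convolution
    "num_gpus": 16,             # training distributed over 16 GPUs (via Horovod)
}
```

Collecting these values in a single dictionary makes it easy to see at a glance which settings the paper specifies and which (e.g., GPU model, library versions) it omits.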