LiDAR-PTQ: Post-Training Quantization for Point Cloud 3D Object Detection

Authors: Sifan Zhou, Liang Li, Xinyu Zhang, Bo Zhang, Shipeng Bai, Miao Sun, Ziyu Zhao, Xiaobo Lu, Xiangxiang Chu

ICLR 2024

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response.
Research Type: Experimental. LLM response: "Extensive experiments demonstrate that our LiDAR-PTQ can achieve state-of-the-art quantization performance when applied to CenterPoint (both Pillar-based and Voxel-based)."

Researcher Affiliation: Collaboration. LLM response: Southeast University, Meituan Inc., Zhejiang University, Nanyang Technological University.
Pseudocode: Yes. LLM response: "We formulate our LiDAR-PTQ algorithm for a full-precision 3D detector in Algorithm 2."
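The paper's Algorithm 2 is not reproduced in this report. For orientation only, the sketch below shows a generic post-training quantization calibration pass (max calibration with uniform symmetric fake-quantization); it is not the paper's actual algorithm, and all names here (`calibrate_scales`, `quantize_tensor`, `model`, `calib_loader`) are hypothetical.

```python
# Generic PTQ calibration sketch -- NOT the paper's Algorithm 2.
import torch

def quantize_tensor(x: torch.Tensor, scale: float, num_bits: int = 8):
    """Uniform symmetric fake-quantization: quantize, clamp, dequantize."""
    qmax = 2 ** (num_bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale

@torch.no_grad()
def calibrate_scales(model, calib_loader, num_bits: int = 8):
    """Derive per-tensor activation scales from calibration data (max calibration)."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = {}
    hooks = []

    def make_hook(name):
        def hook(_module, _inp, out):
            # Track the largest activation magnitude seen for this layer.
            m = out.detach().abs().max().item()
            max_abs[name] = max(max_abs.get(name, 0.0), m)
        return hook

    for name, module in model.named_modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            hooks.append(module.register_forward_hook(make_hook(name)))

    for batch in calib_loader:  # e.g. the 256 calibration frames
        model(batch)

    for h in hooks:
        h.remove()
    # scale = max|x| / qmax, so the observed range maps onto the integer grid
    return {name: m / qmax for name, m in max_abs.items()}
```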
Open Source Code: Yes. LLM response: "Code will be released at https://github.com/StiphyJay/LiDAR-PTQ."
Open Datasets: Yes. LLM response: "To evaluate the effectiveness of our proposed LiDAR-PTQ, we conduct main experiments on the large-scale autonomous driving dataset Waymo Open Dataset (WOD) (Sun et al., 2020)."
Dataset Splits: Yes. LLM response: "In the WOD dataset, we randomly sample 256 frames of point cloud data from the training set as the calibration data. The calibration set proportion is 0.16% (256/158,081) for WOD. In the nuScenes dataset, the calibration set proportion is 0.91% (256/28,130)."
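A minimal sketch of how such a calibration split could be drawn and the reported proportions checked, assuming a uniform random sample over training frames; the frame counts come from the quote above, while the seed and sampling code are assumptions.

```python
import random

WOD_TRAIN_FRAMES = 158_081       # from the paper's quote
NUSCENES_TRAIN_FRAMES = 28_130   # from the paper's quote
CALIB_SIZE = 256

# Randomly sample 256 frame indices from the training set for calibration.
rng = random.Random(0)  # fixed seed is an assumption; the paper states none
wod_calib_ids = rng.sample(range(WOD_TRAIN_FRAMES), CALIB_SIZE)

# Sanity-check the reported calibration proportions.
print(f"WOD:      {CALIB_SIZE / WOD_TRAIN_FRAMES:.2%}")      # -> 0.16%
print(f"nuScenes: {CALIB_SIZE / NUSCENES_TRAIN_FRAMES:.2%}") # -> 0.91%
```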
Hardware Specification: Yes. LLM response: "We execute all experiments on a single NVIDIA Tesla V100 GPU. For the speed test, the inference time of all comparison methods is measured on an NVIDIA Jetson AGX Orin."
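The paper does not describe its timing harness. The sketch below shows a common PyTorch pattern for measuring GPU inference latency on a device such as the Jetson AGX Orin, with explicit CUDA synchronization so asynchronous kernel launches are not undercounted; `model` and `sample` are placeholders for the detector and one input frame.

```python
import time
import torch

@torch.no_grad()
def measure_latency_ms(model, sample, warmup: int = 20, iters: int = 100) -> float:
    model.eval()
    for _ in range(warmup):       # warm up kernels and caches
        model(sample)
    torch.cuda.synchronize()      # GPU work is async; sync before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        model(sample)
    torch.cuda.synchronize()      # wait for all queued kernels to finish
    return (time.perf_counter() - start) / iters * 1e3  # mean latency in ms
```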
Software Dependencies: No. LLM response: The paper mentions using the "CenterPoint (Yin et al., 2021) official open-source code based on the Det3D (Zhu et al., 2019) framework" but does not provide version numbers for software components or libraries.
Experiment Setup: Yes. LLM response: "The learning rate for the activation quantization scaling factor is 5e-5, and for weight quantization rounding it is 5e-3. In the TGPL loss, we set γ as 0.1 and K as 500."
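A hedged sketch of how these two learning rates could be wired up as PyTorch optimizer parameter groups. The optimizer choice and parameter names (`act_scales`, `weight_rounding`) are assumptions; only the learning rates and the TGPL hyperparameters γ and K come from the paper.

```python
import torch

# Placeholder quantization parameters (hypothetical shapes and names).
act_scales = [torch.nn.Parameter(torch.ones(1))]        # activation quant scale factors
weight_rounding = [torch.nn.Parameter(torch.zeros(1))]  # weight rounding offsets

optimizer = torch.optim.Adam([          # optimizer choice is an assumption
    {"params": act_scales, "lr": 5e-5},       # activation scale lr (from the paper)
    {"params": weight_rounding, "lr": 5e-3},  # weight rounding lr (from the paper)
])

GAMMA = 0.1  # TGPL loss gamma (from the paper)
K = 500      # TGPL loss K (from the paper)
```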