BiPointNet: Binary Neural Network for Point Clouds
Authors: Haotong Qin, Zhongang Cai, Mingyuan Zhang, Yifu Ding, Haiyu Zhao, Shuai Yi, Xianglong Liu, Hao Su
ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that BiPointNet outperforms existing binarization methods by convincing margins, at the level even comparable with the full precision counterpart. We highlight that our techniques are generic, guaranteeing significant improvements on various fundamental tasks and mainstream backbones. Moreover, BiPointNet gives an impressive 14.7× speedup and 18.9× storage saving on real-world resource-constrained devices. |
| Researcher Affiliation | Collaboration | 1 State Key Lab of Software Development Environment, Beihang University; 2 Shen Yuan Honors College, Beihang University; 3 SenseTime Research; 4 University of California, San Diego |
| Pseudocode | Yes | Algorithm 1: Monte Carlo Simulation for EMA-max (a hedged sketch of such a simulation appears after the table). |
| Open Source Code | Yes | Our code is released at https://github.com/htqin/BiPointNet. |
| Open Datasets | Yes | ModelNet40 (Wu et al., 2015), part segmentation on ShapeNet (Chang et al., 2015), and semantic segmentation on S3DIS (Armeni et al., 2016). |
| Dataset Splits | No | The paper mentions training epochs but does not specify validation dataset splits or how data was partitioned for validation. |
| Hardware Specification | Yes | We further implement our BiPointNet on Raspberry Pi 4B with 1.5 GHz 64-bit quad-core ARM CPU Cortex-A72 and Raspberry Pi 3B with 1.2 GHz 64-bit quad-core ARM CPU Cortex-A53. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number or other software dependencies with their versions. |
| Experiment Setup | Yes | Following previous works, we train 200 epochs, 250 epochs, 128 epochs on point cloud classification, part segmentation, semantic segmentation respectively. To stably train the binarized models, we use a learning rate of 0.001 with Adam and Cosine Annealing learning rate decay for all binarized models on all three tasks. (See the configuration sketch after the table.) |
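
The pseudocode evidence refers to the paper's Algorithm 1, a Monte Carlo simulation used to calibrate the offset in EMA-max. The exact procedure is defined in the paper; below is only a minimal sketch, assuming the offset is approximated as the expected maximum of n i.i.d. standard normal features. The function name `ema_max_offset`, its arguments, and the choice of the mean as the statistic are illustrative assumptions, not the authors' implementation.

```python
import torch

def ema_max_offset(n_points: int, n_trials: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of the expected maximum of n_points i.i.d. N(0, 1) samples.

    Hypothetical sketch: in EMA-max the max-pooled feature is shifted by an
    offset so that binarization keeps high information entropy; here that
    offset is approximated by simulation (an assumption, not the paper's code).
    """
    torch.manual_seed(seed)
    samples = torch.randn(n_trials, n_points)       # simulated pre-aggregation features
    return samples.max(dim=1).values.mean().item()  # average of the per-trial maxima

# The estimated offset grows slowly with the number of aggregated points.
for n in (64, 256, 1024):
    print(n, round(ema_max_offset(n), 3))
```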
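
The reported experiment setup maps directly onto a standard PyTorch optimizer and scheduler configuration. A minimal sketch, assuming an ordinary per-epoch training loop; the model, epoch count, and loop body are placeholders, not the released BiPointNet code.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(3, 40)   # placeholder standing in for a binarized point cloud network
epochs = 200                     # classification; the paper reports 250 for part seg. and 128 for semantic seg.

optimizer = Adam(model.parameters(), lr=1e-3)           # learning rate 0.001, as reported
scheduler = CosineAnnealingLR(optimizer, T_max=epochs)  # cosine annealing learning rate decay

for epoch in range(epochs):
    # ... forward pass, loss, backward pass, optimizer.step() on the real data ...
    scheduler.step()             # decay the learning rate once per epoch
```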