PointINet: Point Cloud Frame Interpolation Network
Authors: Fan Lu, Guang Chen, Sanqing Qu, Zhijun Li, Yinlong Liu, Alois Knoll
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We design both quantitative and qualitative experiments to evaluate the performance of the point cloud frame interpolation method and extensive experiments on two large-scale outdoor LiDAR datasets demonstrate the effectiveness of the proposed PointINet. |
| Researcher Affiliation | Academia | Fan Lu,¹ Guang Chen,¹* Sanqing Qu,¹ Zhijun Li,² Yinlong Liu,³ Alois Knoll³ — ¹ Tongji University, ² University of Science and Technology of China, ³ Technische Universität München |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled 'Pseudocode' or 'Algorithm', nor does it present structured code-like blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/ispc-lab/PointINet.git. |
| Open Datasets | Yes | We evaluate the proposed method on two large-scale outdoor LiDAR datasets, namely the KITTI odometry dataset (Geiger, Lenz, and Urtasun 2012) and the nuScenes dataset (Caesar et al. 2020). |
| Dataset Splits | Yes | The KITTI odometry dataset provides 11 sequences with ground truth (00-10); we use sequence 00 to train the network, 01 to validate, and the others to evaluate. |
| Hardware Specification | Yes | The efficiency of the proposed PointINet is evaluated on a PC with an NVIDIA GeForce RTX 2060. |
| Software Dependencies | No | The paper mentions 'All of the network is implemented using PyTorch (Paszke et al. 2019)', but it does not specify the version number for PyTorch or any other software dependencies crucial for replication. |
| Experiment Setup | Yes | We randomly downsample the point clouds to 16384 points during training and the number of neighbor points K is set to 32 in our implementation. The channels of the layers of Shared-MLP in attentive points fusion module are set to [64, 64, 128]. ... Adam is used as the optimizer. |
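The setup details above (random downsampling to 16384 points, a Shared-MLP with channel widths [64, 64, 128] in the attentive points fusion module, PyTorch) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `SharedMLP` construction via 1x1 convolutions with BatchNorm and ReLU, and the input channel count, are common conventions assumed here rather than stated in the paper.

```python
import torch
import torch.nn as nn


def random_downsample(points: torch.Tensor, n: int = 16384) -> torch.Tensor:
    """Randomly downsample a point cloud of shape (N, C) to n points,
    as in the training setup described in the paper."""
    idx = torch.randperm(points.shape[0])[:n]
    return points[idx]


class SharedMLP(nn.Module):
    """Pointwise shared MLP applied independently to every point via 1x1
    convolutions. Channel widths [64, 64, 128] follow the paper; the
    BatchNorm/ReLU layers and in_channels value are assumptions."""

    def __init__(self, in_channels: int, channels=(64, 64, 128)):
        super().__init__()
        layers = []
        c_prev = in_channels
        for c in channels:
            layers += [
                nn.Conv1d(c_prev, c, kernel_size=1),
                nn.BatchNorm1d(c),
                nn.ReLU(inplace=True),
            ]
            c_prev = c
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, num_points) -> (batch, 128, num_points)
        return self.net(x)


if __name__ == "__main__":
    pts = torch.rand(20000, 3)                 # a raw scan with 20k points
    pts = random_downsample(pts, 16384)        # (16384, 3)
    mlp = SharedMLP(in_channels=3)
    feats = mlp(pts.t().unsqueeze(0))          # (1, 128, 16384)
    print(feats.shape)
```

Adam (as named in the paper) would then be attached in the usual way, e.g. `torch.optim.Adam(mlp.parameters())`; the learning rate is not quoted here, so none is assumed.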