Investigate Indistinguishable Points in Semantic Segmentation of 3D Point Cloud
Authors: Mingye Xu, Zhipeng Zhou, Junhao Zhang, Yu Qiao (pp. 3047-3055)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our IAF-Net achieves the comparable results with state-of-the-art performance on several popular 3D point cloud datasets e.g. S3DIS and ScanNet, and clearly outperform other methods on IPBM. Our code will be available at https://github.com/MingyeXu/IAF-Net |
| Researcher Affiliation | Collaboration | 1 Shenzhen Key Lab of Computer Vision and Pattern Recognition, SIAT-SenseTime Joint Lab, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences, China; 3 Shanghai AI Lab, Shanghai, China; 4 SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code will be available at https://github.com/MingyeXu/IAF-Net |
| Open Datasets | Yes | Our IAF-Net achieves the comparable results with state-of-the-art performance on several popular 3D point cloud datasets e.g. S3DIS and ScanNet |
| Dataset Splits | Yes | Following (Boulch 2020), we report the results under two settings: testing on Area 5 and 6-fold cross validation. [...] During the training, 8,192 point samples are chosen, where no less than 2% voxels are occupied and at least 70% of the surface voxels have valid annotation. Points are sampled on-the-fly. All points in the testing set are used for evaluation |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | For training, we randomly select points in the considered point cloud, and extract all points in an infinite column centered on this point, where the column section is 2 meters. For each column, we randomly select 8192 points as the input points. [...] During the training, 8,192 point samples are chosen, where no less than 2% voxels are occupied and at least 70% of the surface voxels have valid annotation. Points are sampled on-the-fly. |
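The experiment-setup row describes the paper's training-block sampling: pick a random center point, take every point inside a vertical column with a 2-meter square cross-section (unbounded in height), then randomly draw 8,192 points from that column. A minimal NumPy sketch of that procedure is below; the function name, array layout, and with-replacement fallback for sparse columns are assumptions on my part, not details from the authors' released code.

```python
import numpy as np

def sample_column(points, n_samples=8192, column_size=2.0, rng=None):
    """Sample one training block from a point cloud (illustrative sketch).

    points      : (N, 3+) array; columns 0 and 1 are the x/y coordinates.
    n_samples   : number of points to draw (8,192 in the paper's setup).
    column_size : side length in meters of the column's square section.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Pick a random point as the column center (x/y only; column is
    # "infinite" along the vertical axis, so z is unconstrained).
    center = points[rng.integers(len(points)), :2]
    half = column_size / 2.0
    in_column = np.all(np.abs(points[:, :2] - center) <= half, axis=1)
    column_pts = points[in_column]
    # Assumed fallback: sample with replacement when the column holds
    # fewer than n_samples points, so the block size stays fixed.
    idx = rng.choice(len(column_pts), size=n_samples,
                     replace=len(column_pts) < n_samples)
    return column_pts[idx]
```

Keeping the block size fixed at 8,192 regardless of local density is what lets such columns be batched directly into a network with a fixed input size.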