Unified 3D Segmenter As Prototypical Classifiers
Authors: Zheyun Qin, Cheng Han, Qifan Wang, Xiushan Nie, Yilong Yin, Xiankai Lu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results demonstrate that PROTOSEG outperforms concurrent well-known specialized architectures on 3D point cloud benchmarks, achieving 72.3%, 76.4% and 74.2% mIoU for semantic segmentation on S3DIS, ScanNet V2 and SemanticKITTI, 66.8% mCov and 51.2% mAP for instance segmentation on S3DIS and ScanNet V2, and 62.4% PQ for panoptic segmentation on SemanticKITTI, validating the strength of our concept and the effectiveness of our algorithm. |
| Researcher Affiliation | Collaboration | Zheyun Qin1, Cheng Han2, Qifan Wang3, Xiushan Nie4, Yilong Yin1, Xiankai Lu1; 1Shandong University, 2Rochester Institute of Technology, 3Meta AI, 4Shandong Jianzhu University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and models are available at https://github.com/zyqin19/PROTOSEG. |
| Open Datasets | Yes | "S3DIS [9], a large-scale indoor point cloud dataset, encompasses point clouds from 271 rooms across 6 areas." "ScanNet V2 [10] provides over 1,500 indoor scenes and around 2.5 million annotated RGB-D images..." "SemanticKITTI [11] is introduced based on the well-known KITTI Vision [71] benchmark..." |
| Dataset Splits | Yes | Table 2: Comparisons of semantic segmentation with mIoU on ScanNet v2 [10] (see 5.1), reporting both Test and Val. columns; and Table 3: Comparisons of semantic segmentation performance on the SemanticKITTI val set (see 5.1). |
| Hardware Specification | Yes | Training and testing are conducted on eight NVIDIA A100 GPUs. |
| Software Dependencies | No | The paper does not explicitly provide specific software dependencies with version numbers, such as Python library versions or framework versions. |
| Experiment Setup | Yes | "The hyper-parameter κ balances the convergence speed and stability of Eq. 10 in addition to smoothing the association (Eq. 9). We just use κ = 0.05 following [30] for our experiments, not extensively fine-tuned." and "Our model achieves the best performance when the momentum coefficient is set to 0.999." and "The mIoU score peaks at 72.34% when K = 10." |
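
The setup quotes above mention three concrete hyper-parameters: K = 10 prototypes per class, an association temperature κ = 0.05, and a prototype momentum coefficient of 0.999. The sketch below is a minimal illustration, not the authors' released implementation (see the linked PROTOSEG repository for that): the tensor shapes, the class-major prototype layout, and the exact EMA update rule are assumptions, included only to make the roles of κ, K, and the momentum concrete.

```python
# Minimal sketch (not the authors' code) of a prototype-based point classifier:
# K = 10 prototypes per class, temperature kappa = 0.05 smoothing the
# point-to-prototype association, and an EMA prototype update with momentum
# 0.999. Feature dimension and class count are illustrative assumptions.
import torch
import torch.nn.functional as F

NUM_CLASSES, K, DIM = 13, 10, 128   # e.g. 13 S3DIS classes; DIM is assumed
KAPPA, MOMENTUM = 0.05, 0.999       # values quoted in the paper's setup

# (NUM_CLASSES * K, DIM) bank of l2-normalized prototypes, class-major order:
# rows [c*K : (c+1)*K] belong to class c.
prototypes = F.normalize(torch.randn(NUM_CLASSES * K, DIM), dim=1)

def classify(points: torch.Tensor) -> torch.Tensor:
    """Score each point feature against every class via its prototypes.

    points: (N, DIM) embeddings from the backbone (assumed l2-normalized).
    Returns (N, NUM_CLASSES): max similarity over each class's K prototypes,
    sharpened/smoothed by the temperature KAPPA.
    """
    sim = points @ prototypes.t()                # (N, NUM_CLASSES * K)
    sim = sim.view(-1, NUM_CLASSES, K) / KAPPA   # temperature-scaled association
    return sim.max(dim=2).values                 # per-class scores

@torch.no_grad()
def update_prototypes(points: torch.Tensor, assignments: torch.Tensor) -> None:
    """EMA update of the prototype bank with momentum 0.999.

    assignments: (N,) flat prototype index per point. The exact update rule
    used in the paper may differ; this is only a plausible variant.
    """
    for p in range(NUM_CLASSES * K):
        mask = assignments == p
        if mask.any():
            mean_feat = F.normalize(points[mask].mean(dim=0), dim=0)
            prototypes[p] = F.normalize(
                MOMENTUM * prototypes[p] + (1.0 - MOMENTUM) * mean_feat, dim=0
            )

if __name__ == "__main__":
    feats = F.normalize(torch.randn(4096, DIM), dim=1)   # dummy point features
    scores = classify(feats)                             # (4096, NUM_CLASSES)
    nearest = (feats @ prototypes.t()).argmax(dim=1)     # flat prototype index
    update_prototypes(feats, nearest)
```

In this layout, a larger κ flattens the point-to-prototype similarities (slower but more stable association), while a momentum close to 1 makes the prototype bank drift slowly across training iterations, which matches the reported sweet spots of κ = 0.05 and momentum 0.999.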