Distribution Guidance Network for Weakly Supervised Point Cloud Semantic Segmentation
Authors: Zhiyi Pan, Wei Gao, Shan Liu, Ge Li
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate the rationality and effectiveness of our distribution choice and network design. Consequently, DGNet achieves state-of-the-art performance on multiple datasets and under various weakly supervised settings. |
| Researcher Affiliation | Collaboration | Zhiyi Pan (SECE, Peking University; Peng Cheng Laboratory) panzhiyi@stu.pku.edu.cn; Wei Gao (SECE, Peking University) gaowei262@pku.edu.cn; Shan Liu (Media Laboratory, Tencent) shanl@tencent.com; Ge Li (SECE, Peking University) geli@ece.pku.edu.cn |
| Pseudocode | Yes | Algorithm 1: soft-moVMF Algorithm (see the sketch after the table) |
| Open Source Code | Yes | The relevant code and data will be open-sourced upon acceptance of the paper. |
| Open Datasets | Yes | Datasets. S3DIS [1] encompasses six indoor areas, constituting a total of 271 rooms with 13 categories. ... ScanNet V2 [11] offers a substantial collection of 1,513 scanned scenes ... SemanticKITTI [5] with 19 classes is also considered. |
| Dataset Splits | Yes | Area 5 within S3DIS serves as the validation set, while the remaining areas are allocated for network training. ... we utilize 1,201 scenes for training and 312 scenes for validation. ... Point cloud sequences 00 to 10 are used in training, with sequence 08 as the validation set. |
| Hardware Specification | Yes | In our implementation, the DGNet is trained with one NVIDIA V100 GPU on S3DIS, eight NVIDIA TESLA T4 GPUs on ScanNet V2, and one NVIDIA V100 GPU on SemanticKITTI. |
| Software Dependencies | No | ResGCN-28 in DeepGCN [29] and PointNeXt-L [43] are reimplemented as the segmentation backbones with the OpenPoints library [43]. The paper mentions software components but does not provide specific version numbers. |
| Experiment Setup | Yes | For the truncated cross-entropy loss, β = 0.8. The concentration constant κ = 10 and the iteration number t = 10. The distribution alignment branch is not activated in the first 50 epochs to stabilize feature learning. (A hedged loss sketch follows the table.) |
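
The Pseudocode row above refers to Algorithm 1, soft-moVMF. For orientation only, here is a minimal sketch of a standard soft-assignment EM for a mixture of von Mises-Fisher distributions with a fixed, shared concentration, plugging in the reported κ = 10 and t = 10 iterations; the function name, initialization, and other details are assumptions and may differ from the paper's Algorithm 1.

```python
import numpy as np

def soft_movmf(features, num_clusters, kappa=10.0, iters=10, seed=0):
    """Soft-assignment EM for a mixture of von Mises-Fisher distributions
    with a fixed, shared concentration kappa (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # Project features onto the unit hypersphere, as vMF requires.
    x = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    n, _ = x.shape
    # Initialize mean directions from random points and uniform mixing weights.
    mu = x[rng.choice(n, num_clusters, replace=False)]
    prior = np.full(num_clusters, 1.0 / num_clusters)
    for _ in range(iters):
        # E-step: responsibilities; with a shared kappa the vMF normalizer cancels.
        logits = kappa * x @ mu.T + np.log(prior + 1e-12)
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and re-normalize weighted mean directions.
        prior = resp.mean(axis=0)
        r = resp.T @ x
        mu = r / (np.linalg.norm(r, axis=1, keepdims=True) + 1e-8)
    return resp, mu

# Example: cluster 2048 random 64-d point features into 13 prototypes.
feats = np.random.randn(2048, 64)
resp, mu = soft_movmf(feats, num_clusters=13, kappa=10.0, iters=10)
```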
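
The Experiment Setup row cites a truncated cross-entropy loss with β = 0.8. The snippet below sketches one plausible reading, in which only points whose pseudo-label confidence reaches β contribute to the loss; the function name, tensor shapes, and the exact truncation rule are assumptions rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def truncated_cross_entropy(logits, pseudo_probs, beta=0.8):
    # Hypothetical reading: supervise only points whose pseudo-label confidence
    # reaches beta; the remaining (uncertain) points are truncated from the loss.
    confidence, pseudo_labels = pseudo_probs.max(dim=1)  # per-point confidence and hard label
    keep = confidence >= beta                            # mask of confidently pseudo-labeled points
    if not keep.any():
        return logits.sum() * 0.0                        # no confident points: zero loss
    return F.cross_entropy(logits[keep], pseudo_labels[keep])

# Example: 4096 points, 13 classes (S3DIS), beta = 0.8 as reported.
logits = torch.randn(4096, 13)
pseudo_probs = torch.softmax(torch.randn(4096, 13), dim=1)
loss = truncated_cross_entropy(logits, pseudo_probs, beta=0.8)
```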