SSPC-Net: Semi-supervised Semantic 3D Point Cloud Segmentation Network

Authors: Mingmei Cheng, Le Hui, Jin Xie, Jian Yang (pp. 1140-1147)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on various datasets demonstrate that our semi-supervised segmentation method can achieve better performance than the current semi-supervised segmentation method with fewer annotated 3D points."
Researcher Affiliation | Academia | PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; Jiangsu Key Lab of Image and Video Understanding for Social Security; School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
Pseudocode | Yes | Algorithm 1: Graph-based supervision extension; Algorithm 2: Superpoint dropout strategy
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code.
Open Datasets | Yes | "For the S3DIS (Armeni et al. 2016), ScanNet (Dai et al. 2017) and vKITTI (Gaidon et al. 2016) dataset, we employ the mini-batch size of 4, 8, 8, respectively."
Dataset Splits | Yes | "ScanNet: We split the dataset into a training set with 1201 scenes and a testing set with 312 scenes following (Qi et al. 2017b). vKITTI: For evaluation, we split the dataset into 6 non-overlapping sub-sequences and employ 6-fold cross validation following (Ye et al. 2018)."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments.
Software Dependencies | No | The paper mentions the Adam optimizer but does not specify software dependencies with version numbers (e.g., PyTorch version, Python version, specific libraries or solvers).
Experiment Setup | Yes | "To train our model, we adopt Adam optimizer with a base learning rate of 0.01. For the S3DIS (Armeni et al. 2016), ScanNet (Dai et al. 2017) and vKITTI (Gaidon et al. 2016) dataset, we employ the mini-batch size of 4, 8, 8, respectively. We empirically implement the dynamic label propagation module every M = 40 epochs."
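The 6-fold cross-validation protocol reported for vKITTI (one held-out sub-sequence per fold over 6 non-overlapping sub-sequences) can be sketched as follows; this is a minimal illustration of that split scheme, and the sub-sequence identifiers are placeholders, not names from the paper.

```python
def make_folds(subsequences):
    """Leave-one-out folds over sub-sequences: each fold tests on one
    held-out sub-sequence and trains on the remaining ones."""
    folds = []
    for i, held_out in enumerate(subsequences):
        train = [s for j, s in enumerate(subsequences) if j != i]
        folds.append((train, [held_out]))
    return folds

# Placeholder identifiers for the 6 non-overlapping vKITTI sub-sequences.
subsequences = [f"seq{i}" for i in range(6)]
folds = make_folds(subsequences)  # 6 (train, test) splits
```

Each of the 6 sub-sequences appears exactly once as the test split, so the per-fold results can be averaged for the cross-validated score.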
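The quoted experiment setup can be summarized in a small configuration sketch. This is an assumed structure (the paper's code is not released); only the hyperparameters quoted above are from the paper, and the 200-epoch horizon below is illustrative, not stated in the source.

```python
# Hyperparameters quoted in the experiment setup.
BASE_LR = 0.01                                        # Adam base learning rate
BATCH_SIZE = {"S3DIS": 4, "ScanNet": 8, "vKITTI": 8}  # per-dataset mini-batch
M = 40                                                # label-propagation period

def should_propagate_labels(epoch, period=M):
    """True on epochs where the dynamic label propagation module would run
    (every M epochs, per the paper)."""
    return epoch > 0 and epoch % period == 0

# Illustrative 200-epoch run: propagation fires at epochs 40, 80, ..., 200.
propagation_epochs = [e for e in range(1, 201) if should_propagate_labels(e)]
```

A schedule like this makes the reported M = 40 cadence concrete: most epochs are ordinary supervised updates, with the propagation step interleaved periodically.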