Weakly Supervised 3D Segmentation via Receptive-Driven Pseudo Label Consistency and Structural Consistency

Authors: Yuxiang Lan, Yachao Zhang, Yanyun Qu, Cong Wang, Chengyang Li, Jia Cai, Yuan Xie, Zongze Wu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experimental results on three challenging datasets demonstrate that our method significantly outperforms state-of-the-art weakly supervised methods and even achieves comparable performance to the fully supervised methods."
Researcher Affiliation | Collaboration | Yuxiang Lan (1), Yachao Zhang (1), Yanyun Qu (1), Cong Wang (2), Chengyang Li (3), Jia Cai (3), Yuan Xie (3), Zongze Wu (4). (1) School of Informatics, Xiamen University, Fujian, China; (2) Huawei Technologies, Shanghai, China; (3) School of Computer Science and Technology, East China Normal University, Shanghai, China; (4) School of Mechatronics and Control Engineering, Shenzhen University, Guangdong, China
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks; the figures are diagrams of the framework and processes.
Open Source Code | No | The paper does not provide an explicit statement about releasing open-source code or a link to a code repository for the described methodology.
Open Datasets | Yes | "To thoroughly evaluate our RPSC, we adopt three challenging large-scale point cloud benchmarks: S3DIS (Armeni et al. 2016), ScanNet-v2 (Dai et al. 2017) and SemanticKITTI (Behley et al. 2019)."
Dataset Splits | No | The paper mentions using "all points of the original test set" but does not explicitly describe the training and validation splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | Yes | "Our framework uses RandLA-Net (Hu et al. 2020) as backbone and it is trained on a single NVIDIA Tesla T4 with Tensorflow 1.14."
Software Dependencies | Yes | "Our framework uses RandLA-Net (Hu et al. 2020) as backbone and it is trained on a single NVIDIA Tesla T4 with Tensorflow 1.14."
Experiment Setup | Yes | "The Adam Optimizer is adopted for training with an initial learning rate of 0.01 and momentum of 0.9. We first pre-train our network for 100 epochs using labeled points. Then we perform 10 iterations of training. In each training iteration, we train our network for 30 epochs by Ltotal in Eq. (14)... In all experiments, we set the hyperparameters δ = 0.8, ϵ = 0.9 and α = 0.1 empirically, the scalar hyperparameters λscore = 1.5, λssc = 0.75 and λrsc = 0.1 are selected through experiments... while the batch size is kept fixed to 8 in all dataset experiments."
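The following is a minimal sketch of the training configuration reported in the Experiment Setup row, written against TensorFlow 1.x since the paper states TensorFlow 1.14. Only the numeric values and the Adam / learning-rate / batch-size choices come from the quoted text; the names HPARAMS and build_optimizer are illustrative, mapping the reported momentum of 0.9 onto Adam's beta1 is an assumption, and the exact form of L_total in Eq. (14) is not reproduced here.

    import tensorflow as tf  # the paper reports TensorFlow 1.14

    # Reported hyperparameters (values quoted from the paper; the dict name is illustrative).
    HPARAMS = {
        "learning_rate": 0.01,       # initial learning rate for Adam
        "momentum": 0.9,             # reported momentum (assumed to be Adam's beta1)
        "batch_size": 8,             # fixed across all dataset experiments
        "pretrain_epochs": 100,      # pre-training on labeled points
        "training_iterations": 10,   # outer iterations after pre-training
        "epochs_per_iteration": 30,  # epochs per iteration, trained with L_total (Eq. 14)
        "delta": 0.8,                # set empirically
        "epsilon": 0.9,              # set empirically
        "alpha": 0.1,                # set empirically
        "lambda_score": 1.5,         # loss weights selected through experiments
        "lambda_ssc": 0.75,
        "lambda_rsc": 0.1,
    }

    def build_optimizer(hparams):
        # Adam optimizer as reported; beta1 carries the stated momentum of 0.9 (assumption).
        return tf.train.AdamOptimizer(
            learning_rate=hparams["learning_rate"],
            beta1=hparams["momentum"],
        )

How the loss weights lambda_score, lambda_ssc and lambda_rsc enter L_total, and where the delta / epsilon / alpha thresholds are applied, is defined by the paper's Eq. (14) and surrounding text rather than by this sketch.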