JSNet: Joint Instance and Semantic Segmentation of 3D Point Clouds
Authors: Lin Zhao, Wenbing Tao
Pages: 12951-12958
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As a result, we evaluate the proposed JSNet on a large-scale 3D indoor point cloud dataset S3DIS and a part dataset ShapeNet, and compare it with existing approaches. Experimental results demonstrate that our approach outperforms the state-of-the-art method in 3D instance segmentation with a significant improvement in 3D semantic prediction, and our method is also beneficial for part segmentation. |
| Researcher Affiliation | Academia | National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China {linzhao, wenbingtao}@hust.edu.cn |
| Pseudocode | No | The paper includes network diagrams and mathematical formulations but does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | The source code for this work is available at https://github.com/dlinzhao/JSNet. |
| Open Datasets | Yes | We evaluate our approach on the following two public datasets: Stanford Large-Scale 3D Indoor Spaces (S3DIS) (Armeni et al. 2016) and ShapeNet (Yi et al. 2016). |
| Dataset Splits | Yes | For a principled evaluation, we follow the same k-fold cross validation as in (Qi et al. 2017a), and we also present the results of the 5th fold (Area 5) following (Tchapmi et al. 2017)... For the large-scale dataset S3DIS, each point in our model is represented by a 9-dim vector (XYZ, RGB and normalized location as to the room). Following the experimental settings in PointNet (Qi et al. 2017a), we split the rooms into overlapped blocks of area 1m × 1m, and each block contains 4096 points. |
| Hardware Specification | Yes | We train the network for 100 epochs with batch size 24 on a single NVIDIA GTX1080Ti. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | During the training process, we configure the network with δv = 0.5, δd = 1.5 and K = 5, where K is the dimension of the embedding. We train the network for 100 epochs with batch size 24 on a single NVIDIA GTX1080Ti. We use the Adam optimizer to optimize the network with momentum set to 0.9, base learning rate set to 0.001, and decay by 0.5 every 12.5k iterations. |
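The "Dataset Splits" row describes S3DIS preprocessing: rooms are split into 1m × 1m blocks, each sampled to 4096 points of a 9-dim feature vector. A minimal sketch of that blocking step, assuming a non-overlapping XY grid and random sampling with replacement (the paper uses overlapped blocks; function names and sampling choices here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def split_into_blocks(points, block_size=1.0, num_points=4096, seed=0):
    """points: (N, 9) array (XYZ, RGB, normalized room location).
    Returns a list of (num_points, 9) blocks on a 2D grid over XY.
    Simplified sketch: non-overlapping grid, unlike the paper's overlapped blocks."""
    rng = np.random.default_rng(seed)
    xy_min = points[:, :2].min(axis=0)
    # Assign each point to a grid cell by its XY coordinates.
    cell_idx = np.floor((points[:, :2] - xy_min) / block_size).astype(int)
    blocks = []
    for cell in np.unique(cell_idx, axis=0):
        members = np.flatnonzero(np.all(cell_idx == cell, axis=1))
        # Sample with replacement so every block has exactly num_points points.
        chosen = rng.choice(members, size=num_points, replace=True)
        blocks.append(points[chosen])
    return blocks
```

Sampling with replacement is one common way to equalize block sizes when a cell holds fewer than 4096 points; the paper does not specify its exact sampling strategy.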
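The "Experiment Setup" row specifies a step-decay schedule: base learning rate 0.001, halved every 12.5k iterations. A sketch of that schedule, assuming standard piecewise-constant step decay (the function name is illustrative, not from the released code):

```python
def learning_rate(iteration, base_lr=0.001, decay_rate=0.5, decay_steps=12500):
    """Step decay as stated in the paper: lr = base_lr * decay_rate ** floor(iteration / decay_steps)."""
    return base_lr * decay_rate ** (iteration // decay_steps)

print(learning_rate(0))      # 0.001
print(learning_rate(12500))  # 0.0005  (first halving)
print(learning_rate(25000))  # 0.00025 (second halving)
```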