Superpoint Transformer for 3D Scene Instance Segmentation
Authors: Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. |
| Researcher Affiliation | Academia | (1) School of Electronic and Information Engineering, South China University of Technology, China; (2) School of Future Technology, South China University of Technology, China |
| Pseudocode | No | The paper describes the model architecture and process in detail but does not include formal pseudocode blocks or algorithms. |
| Open Source Code | Yes | Code is available at https://github.com/sunjiahao1999/SPFormer. |
| Open Datasets | Yes | Experiments are conducted on ScanNetv2 (Dai et al. 2017) and S3DIS (Armeni et al. 2016) datasets. |
| Dataset Splits | Yes | ScanNetv2 has a total of 1613 indoor scenes, of which 1201 are used for training, 312 for validation, and 100 for testing. |
| Hardware Specification | Yes | The runtime is measured on the same RTX 3090 GPU. |
| Software Dependencies | Yes | For a fair comparison, the SSC and SC layers in all the above methods are implemented by spconv v2.1. |
| Experiment Setup | Yes | In our experiments, we set λ_cls = 0.5, λ_mask = 1. Empirically, we set τ to 0.5. Empirically, we set β_cls = β_s = 0.5, β_mask = 1. Table 7 presents the selection of the number of query vectors and transformer decoder layers. |
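
The Experiment Setup row quotes the loss weights reported in the paper. As a minimal, hypothetical sketch of how such weights are typically applied (not the authors' actual SPFormer loss, whose exact terms include classification, score, and mask components), the quoted λ_cls = 0.5 and λ_mask = 1 could combine a classification loss and a mask loss as a weighted sum:

```python
import torch
import torch.nn.functional as F

# Weights taken from the quoted experiment setup; the decomposition into a
# single classification term and a single mask term is an illustrative
# assumption, not the paper's exact loss formulation.
LAMBDA_CLS = 0.5
LAMBDA_MASK = 1.0

def total_loss(cls_logits, cls_targets, mask_logits, mask_targets):
    """Weighted sum of a classification loss and a binary mask loss (sketch)."""
    loss_cls = F.cross_entropy(cls_logits, cls_targets)
    loss_mask = F.binary_cross_entropy_with_logits(mask_logits, mask_targets.float())
    return LAMBDA_CLS * loss_cls + LAMBDA_MASK * loss_mask
```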