PUPS: Point Cloud Unified Panoptic Segmentation
Authors: Shihao Su, Jianyun Xu, Huanyu Wang, Zhenwei Miao, Xin Zhan, Dayang Hao, Xi Li
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate the effectiveness of our proposals, we conduct extensive experiments on two point cloud panoptic segmentation datasets. Our method ranks 1st on the leaderboard of SemanticKITTI (Behley, Milioto, and Stachniss 2020) and achieves state-of-the-art results on nuScenes (Caesar et al. 2020). |
| Researcher Affiliation | Collaboration | ¹College of Computer Science and Technology, Zhejiang University; ²Alibaba Group; ³Shanghai Institute for Advanced Study, Zhejiang University; ⁴Shanghai AI Laboratory |
| Pseudocode | No | The paper does not include a dedicated pseudocode block or algorithm listing. |
| Open Source Code | No | The paper states that its implementation is based on MMDetection3D and links to a competition leaderboard for its results, but it provides no link to source code for the PUPS framework and does not state that the code is open source. |
| Open Datasets | Yes | SemanticKITTI proposes the first panoptic segmentation challenge on point cloud data. It contains 22 data sequences split into 3 parts: 10 for training, 1 for validation and 11 for testing. (Behley, Milioto, and Stachniss 2020) nuScenes is a large-scale dataset for autonomous driving, which contains LiDAR data of 1000 scenes. (Caesar et al. 2020) |
| Dataset Splits | Yes | SemanticKITTI contains 22 data sequences split into 3 parts: 10 for training, 1 for validation and 11 for testing. The 1000 nuScenes scenes are divided into 3 parts: 750 for training, 100 for validation and 150 for testing. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, memory) used for running its experiments. |
| Software Dependencies | No | Our implementation is based on MMDetection3D (MMDetection3D Contributors 2020). The paper names the framework but does not give its version number or any other software dependencies with versions. |
| Experiment Setup | Yes | Specifically, we train our models for 80 epochs with a batch size of 4. The learning rate is set to 0.002 initially and decreased by a factor of 0.1 after 50 epochs. We adopt AdamW (Loshchilov and Hutter 2017) with a weight decay of 0.05 as our optimizer. ... Unless specified, the point feature dimension is set to 128 and the number of classifiers is set to 100. The number of refinement stages is 3. As for training, the losses are included in Equation 7 and the coefficients α, β, γ are set to 4, 1, 1 respectively. (A hedged sketch of this schedule follows the table.) |
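
For concreteness, the optimization schedule reported in the Experiment Setup row can be expressed as a short PyTorch sketch. This is a minimal illustration, not the authors' code: the paper's implementation is based on MMDetection3D and is not released, so the `model` below is a hypothetical placeholder, and the data loading and the Equation 7 loss terms are elided.

```python
import torch

# Hypothetical placeholder for the PUPS network; the real model is built on
# MMDetection3D and has not been open-sourced by the authors.
model = torch.nn.Linear(128, 100)

# AdamW with weight decay 0.05, as reported in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=0.002, weight_decay=0.05)

# Initial learning rate 0.002, decayed by a factor of 0.1 after epoch 50.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50], gamma=0.1
)

for epoch in range(80):  # 80 training epochs, batch size 4
    # ... forward pass, Equation 7 loss weighted with alpha=4, beta=1,
    # gamma=1, backward pass, and optimizer.step() would go here ...
    scheduler.step()
```

Since the paper only reports the schedule at this level of detail (and no hardware or software versions, per the rows above), an exact reproduction would still require choices the sketch leaves open, such as gradient clipping and warmup.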