TopoSeg: Topology-aware Segmentation for Point Clouds
Authors: Weiquan Liu, Hanyun Guo, Weini Zhang, Yu Zang, Cheng Wang, Jonathan Li
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our proposed TopoSeg module can be easily embedded into the point cloud segmentation network and improve the segmentation performance. |
| Researcher Affiliation | Academia | 1Fujian Key Laboratory of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University, Xiamen, China 2Departments of Geography and Environmental Management and Systems Design Engineering, University of Waterloo, Waterloo, Canada |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. It describes the mathematical formulations and steps in paragraph form. |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about releasing source code for the described methodology. |
| Open Datasets | Yes | This group of experiments is conducted on the large-scale public dataset of 3D shapes, ShapeNet [Chang et al., 2015]... This group of experiments is conducted on the dataset established in [Yu et al., 2018], which contains 24 CAD models and 12 daily object models with sharp edges. |
| Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset splits (e.g., percentages or sample counts for each split). While it mentions using ShapeNet and a dataset from Yu et al., it does not detail how these datasets were partitioned for training, validation, and testing. |
| Hardware Specification | Yes | In this work, all the experiments are conducted in Linux with a Geforce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions the 'Pytorch framework' and the 'Dionysus package' but does not specify their version numbers, which are required for a reproducible description of software dependencies. |
| Experiment Setup | Yes | Adam optimizer with initial learning rate of 0.001 is used for training, and the learning rate is reduced by half every 20 epochs. The batch size and total training epochs are set to 64 and 200, respectively. In the loss function Eq.(6), the weight of topology loss λ is set to 0.001. ... With the network and training settings, we use Adam optimizer with initial learning rate of 0.001, and the learning rate is reduced by half every 10 epochs. The batch size, total training epochs and the weight of topology loss λ are set to 64, 200 and 0.0005, respectively. |
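The reported training settings (Adam, initial learning rate 0.001, learning rate halved every 20 epochs, batch size 64, 200 epochs, topology-loss weight λ = 0.001 for the ShapeNet experiments) map directly onto a standard PyTorch optimizer/scheduler configuration. The sketch below shows one plausible way to reproduce that schedule; the model, segmentation loss, and topology loss are stand-ins, since the paper's TopoSeg network and Eq.(6) implementation are not released.

```python
import torch

# Placeholder model; the paper's actual segmentation network is not public.
model = torch.nn.Linear(3, 50)

# Settings quoted from the paper (ShapeNet experiments).
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# "the learning rate is reduced by half every 20 epochs"
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

lam = 0.001          # weight of the topology loss in Eq.(6)
total_epochs = 200   # "total training epochs is set to ... 200"
batch_size = 64      # "batch size ... is set to 64"

for epoch in range(total_epochs):
    # Per Eq.(6), the total loss would combine the segmentation loss
    # with lam * topology loss; the inner training loop is elided here.
    ...
    scheduler.step()
```

For the sharp-edge dataset experiments the paper instead halves the learning rate every 10 epochs with λ = 0.0005, which corresponds to `step_size=10` and `lam = 0.0005` in the sketch above.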