BPNet: Bézier Primitive Segmentation on 3D Point Clouds
Authors: Rao Fu, Cheng Wen, Qian Li, Xiao Xiao, Pierre Alliez
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments on the synthetic ABC dataset and real-scan datasets to validate and compare our approach with different baseline methods. Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed. |
| Researcher Affiliation | Collaboration | Rao Fu (1,2), Cheng Wen (3), Qian Li (1), Xiao Xiao (4) and Pierre Alliez (1) — 1: Inria, France; 2: Geometry Factory, France; 3: The University of Sydney, Australia; 4: Shanghai Jiao Tong University, P. R. China |
| Pseudocode | No | The paper describes the steps of its method within the main text, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code is available at: https://github.com/bizerfr/BPNet. |
| Open Datasets | Yes | We evaluate our approach on the ABC dataset [Koch et al., 2019]. |
| Dataset Splits | No | The paper states 'Finally, we use 5,200 CAD models for training and 1,300 CAD models for testing.' which specifies training and testing sets, but does not explicitly mention a separate validation set or its size/proportion. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory) used for running the experiments. It only mentions inference time without hardware context. |
| Software Dependencies | No | The paper mentions using 'CGAL library [CGAL, 2009]' and 'Open Cascade library [Open Cascade, 2018]' for data pre-processing, but it does not provide specific version numbers for these or other core software dependencies like deep learning frameworks (e.g., PyTorch, TensorFlow) or Python. |
| Experiment Setup | Yes | The learning rate for the backbone, soft membership, and uv parameters is set to 10⁻³, while the learning rate for the degree probabilities and control points is set to 10⁻⁴. We set γ as 3.0 for the focal loss, and δ_pull as 0 and δ_push as 2.0 for the embedding losses. We employ Adam to train our network. The model is trained for 150 epochs. |
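The reported loss hyperparameters (focal loss with γ = 3.0; pull/push embedding losses with δ_pull = 0 and δ_push = 2.0) can be sketched as follows. This is a minimal NumPy illustration of these standard loss forms, not the authors' released code; the function names and embedding-loss formulation (squared hinge on distances to instance centroids, as in common discriminative-embedding losses) are our assumptions.

```python
import numpy as np

# Hyperparameter values as reported in the paper.
GAMMA = 3.0       # focal-loss focusing parameter
DELTA_PULL = 0.0  # pull margin for the embedding loss
DELTA_PUSH = 2.0  # push margin between instance centroids

def focal_loss(probs, targets, gamma=GAMMA, eps=1e-8):
    """Multi-class focal loss: down-weights already well-classified points."""
    pt = np.clip(probs[np.arange(len(targets)), targets], eps, 1.0)
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def pull_loss(embeddings, labels, delta=DELTA_PULL):
    """Pull each point's embedding toward its instance centroid (hinged at delta)."""
    total, count = 0.0, 0
    for lbl in np.unique(labels):
        pts = embeddings[labels == lbl]
        mu = pts.mean(axis=0)
        dist = np.linalg.norm(pts - mu, axis=1)
        total += float(np.mean(np.maximum(dist - delta, 0.0) ** 2))
        count += 1
    return total / count

def push_loss(embeddings, labels, delta=DELTA_PUSH):
    """Push distinct instance centroids at least delta apart (hinged)."""
    mus = [embeddings[labels == l].mean(axis=0) for l in np.unique(labels)]
    total, count = 0.0, 0
    for i in range(len(mus)):
        for j in range(i + 1, len(mus)):
            gap = float(np.linalg.norm(mus[i] - mus[j]))
            total += max(delta - gap, 0.0) ** 2
            count += 1
    return total / max(count, 1)
```

With δ_pull = 0 every point is pulled all the way to its centroid, while the push term is zero once centroids are more than 2.0 apart in the embedding space.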