Point Cloud Part Editing: Segmentation, Generation, Assembly, and Selection
Authors: Kaiyi Zhang, Yang Chen, Ximing Yang, Weizhong Zhang, Cheng Jin
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Considerable experiments on different datasets demonstrate the efficiency and effectiveness of SGAS on point cloud part editing. In addition, SGAS can be pruned to realize unsupervised part-aware point cloud generation and achieves state-of-the-art results. |
| Researcher Affiliation | Academia | Kaiyi Zhang1, Yang Chen1, Ximing Yang1, Weizhong Zhang1,2, Cheng Jin1,2 1School of Computer Science, Fudan University, Shanghai, China 2Innovation Center of Calligraphy and Painting Creation Technology, MCT, China {zhangky20, chenyang19, xmyang19, weizhongzhang, jc}@fudan.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a specific repository link or explicitly state that the source code for the methodology is available. |
| Open Datasets | Yes | We evaluate SGAS on PartNet (Mo et al. 2019b) dataset. Following previous works (Gal et al. 2021; Li, Liu, and Walder 2022), we perform unsupervised part-aware point cloud generation on ShapeNet-Partseg (Yi et al. 2016) dataset. |
| Dataset Splits | No | The paper mentions using datasets for training and testing, but does not provide specific details on dataset splits (e.g., exact percentages or sample counts for training, validation, and test sets) needed to reproduce the data partitioning. |
| Hardware Specification | Yes | All the experiments are performed on a single NVIDIA TITAN Xp for 2000 epochs with a batch size of 200. |
| Software Dependencies | No | The paper mentions "Adam optimizers are used for SGAS" but does not specify version numbers for any software components, libraries, or solvers needed for replication. |
| Experiment Setup | Yes | Adam optimizers are used for SGAS with a learning rate of α = 0.0005, coefficients β1 = 0.5 and β2 = 0.99. All the experiments are performed on a single NVIDIA TITAN Xp for 2000 epochs with a batch size of 200. In loss functions, α and β are set to 1 and 1. λgp is set to 10. The threshold in Part Select is set to 0.5. We update the discriminator 5 times for each update of the generator. |