Unsupervised Point Cloud Completion and Segmentation by Generative Adversarial Autoencoding Network
Authors: Changfeng Ma, Yang Yang, Jie Guo, Fei Pan, Chongjun Wang, Yanwen Guo
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and visualizations demonstrate the superiority, generalization and robustness of our method. Comparisons against previous methods show that our method achieves state-of-the-art performance on unsupervised point cloud completion and segmentation on real data. |
| Researcher Affiliation | Academia | Changfeng Ma (Nanjing University, changfengma@smail.nju.edu.cn); Yang Yang (Nanjing University, yyang_nju@outlook.com); Jie Guo (Nanjing University, guojie@nju.edu.cn); Fei Pan (Nanjing University, panfei@smail.nju.edu.cn); Chongjun Wang (Nanjing University, chjwang@nju.edu.cn); Yanwen Guo (Nanjing University, ywguo@nju.edu.cn) |
| Pseudocode | No | The paper states in the ethics checklist that 'We include the pseudo code of our method in our supplementary material,' but the pseudocode is not present in the provided document content. |
| Open Source Code | No | We will make our dataset and codes public in the future. |
| Open Datasets | Yes | We first generate the complete point clouds by sampling CAD models of ShapeNet [3] and ModelNet [35], serving as the artificial data. We train our UGAAN on ScanNet with ShapeNet as the artificial dataset... directly evaluate it on the other two real-scene datasets, including S3DIS and ScanObjectNN [28]. |
| Dataset Splits | Yes | We select four categories to establish four datasets and randomly split the data into train, valid and test sets. (A hedged split sketch appears after this table.) |
| Hardware Specification | Yes | Training our UGAAN takes about 20 hours for convergence with an RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper states 'Our framework is implemented using PyTorch', but it does not specify the version number of PyTorch or any other software dependencies with their versions. |
| Experiment Setup | Yes | The point numbers n_0, n_1 and n_2 in Section 3 are set to 2048, 512 and 2048, and the weights α_1 to α_6 are set to 1, 100, 5, 100, 0.5 and 0.5. We train four categories separately with the Adam optimizer. The learning rate of the optimizer and the batch size for our network are set to 1.0 × 10⁻⁵ and 4, respectively. We train the model for 240 epochs. (A hedged training-loop sketch appears after this table.) |
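
The Dataset Splits row reports only that each category's data are split randomly into train, valid and test sets; no ratios are quoted. The sketch below shows one minimal way to perform such a split. The `split_dataset` name, the list-of-samples input, and the 80/10/10 ratios are illustrative assumptions, not values from the paper.

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Randomly split samples into train/valid/test subsets.

    The paper states only that the split is random; the 80/10/10
    ratios here are an assumption for illustration.
    """
    assert abs(sum(ratios) - 1.0) < 1e-8, "ratios must sum to 1"
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * ratios[0])
    n_valid = int(len(shuffled) * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])

# Usage: train, valid, test = split_dataset(list_of_point_cloud_paths)
```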
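
The Experiment Setup row pins down the optimizer (Adam), learning rate (1.0 × 10⁻⁵), batch size (4), epoch count (240), and point counts (n_0 = n_2 = 2048, n_1 = 512). The following PyTorch sketch wires those quoted values into a training loop. Since the authors' code is not public, `PointEncoderStub`, the dummy data, and the MSE objective are stand-in assumptions; the real UGAAN uses a weighted sum of six loss terms (α_1 to α_6) over its generator and discriminator.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters quoted from the paper's experiment setup.
LEARNING_RATE = 1.0e-5
BATCH_SIZE = 4
NUM_EPOCHS = 240
NUM_POINTS = 2048  # n_0 = n_2 = 2048 per the paper; n_1 = 512

class PointEncoderStub(nn.Module):
    """Stand-in for UGAAN, whose code is not yet released; a real run
    would replace this with the authors' generator/discriminator."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, pts):
        # Per-point MLP applied over the last (xyz) dimension.
        return self.mlp(pts)

# Dummy point clouds with the paper's point count, only so the loop runs.
clouds = torch.randn(32, NUM_POINTS, 3)
loader = DataLoader(TensorDataset(clouds), batch_size=BATCH_SIZE, shuffle=True)

model = PointEncoderStub()
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

for epoch in range(NUM_EPOCHS):
    for (batch,) in loader:
        optimizer.zero_grad()
        # Placeholder reconstruction loss; the paper instead combines six
        # adversarial/reconstruction terms weighted by alpha_1..alpha_6.
        loss = nn.functional.mse_loss(model(batch), batch)
        loss.backward()
        optimizer.step()
```

The paper trains the four categories separately, so a full reproduction would run this loop once per category dataset.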