Orthogonal Dictionary Guided Shape Completion Network for Point Cloud
Authors: Pingping Cai, Deja Scott, Xiaoguang Li, Song Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiment results indicate that the proposed method can reconstruct point clouds with more details and outperform previous state-of-the-art counterparts. We conduct comprehensive experiments on three datasets and the results confirm the effectiveness of the proposed algorithm by outperforming previous SOTA counterparts. |
| Researcher Affiliation | Academia | Pingping Cai, Deja Scott, Xiaoguang Li, Song Wang University of South Carolina, USA {pcai,ds17,xl22}@email.sc.edu, songwang@cec.sc.edu |
| Pseudocode | No | The paper describes the proposed network architecture and its components but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The implementation code is available at https://github.com/corecai163/ODGNet. |
| Open Datasets | Yes | PCN: The PCN dataset is first introduced by Yuan et al. (2018) and contains pairs of partial and complete point clouds from 30,974 models of 8 categories collected from ShapeNet (Chang et al. 2015). ShapeNet-55/34: The ShapeNet-55/34 datasets, introduced in PoinTr (Yu et al. 2021), are also derived from ShapeNet (Chang et al. 2015). KITTI: Since the previous two datasets are synthetic data generated from CAD models or meshes, which might be different from real scanned point clouds, we also include the KITTI dataset (Geiger et al. 2013). |
| Dataset Splits | Yes | To maintain consistency with previous methods (Yuan et al. 2018; Xie et al. 2020; Xiang et al. 2021), we adopt the same train/test splitting strategy, comprising 28,974 training samples, 800 validation samples, and 1,200 testing samples. |
| Hardware Specification | Yes | The training is carried out on two Nvidia V100 32G GPUs. |
| Software Dependencies | No | The paper mentions 'Adam as an optimization function' but does not specify any software dependencies with version numbers (e.g., Python version, specific deep learning frameworks like PyTorch or TensorFlow). |
| Experiment Setup | Yes | To train the network from scratch, we set the total epochs to 400 with a batch size of 32 and use Adam as the optimizer, with an initial learning rate of 0.0004 that is decayed by a factor of 0.8 every 20 epochs. (A hedged sketch of this schedule appears below the table.) |
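
The reported setup maps directly onto a standard optimizer-plus-scheduler configuration. Below is a minimal sketch assuming a PyTorch implementation (the paper does not name its framework); the model, data, and loss are stand-in placeholders, and only the hyperparameters (400 epochs, batch size 32, Adam, initial learning rate 0.0004, 0.8x decay every 20 epochs) come from the paper.

```python
# Minimal sketch of the reported optimization schedule, assuming PyTorch.
# The model, batch, and loss below are stand-ins, not the authors' code;
# only the hyperparameters are taken from the paper.
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(3, 3)                       # stand-in for the completion network
optimizer = Adam(model.parameters(), lr=4e-4)  # "learning rate of 0.0004"
scheduler = StepLR(optimizer, step_size=20, gamma=0.8)  # 0.8x decay every 20 epochs

for epoch in range(400):                      # 400 total epochs
    batch = torch.randn(32, 3)                # stand-in for a batch of 32 samples
    optimizer.zero_grad()
    loss = model(batch).pow(2).mean()         # stand-in for the completion loss
    loss.backward()
    optimizer.step()
    scheduler.step()                          # epoch-level learning-rate decay
```

This reading interprets "decrease the learning rate by 0.8 for every 20 epochs" as a multiplicative 0.8 decay, the conventional step-decay schedule; the authors' released code at https://github.com/corecai163/ODGNet is the authoritative reference.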