Parametric Surface Constrained Upsampler Network for Point Cloud
Authors: Pingping Cai, Zhenyao Wu, Xinyi Wu, Song Wang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The state-of-the-art experimental results on both tasks demonstrate the effectiveness of the proposed method. To validate the effectiveness of the proposed upsampler, we first evaluate it on the PU1K dataset and conduct ablation studies to verify the effectiveness of our network. |
| Researcher Affiliation | Academia | University of South Carolina, USA {pcai,zhenyao,xinyiw}@email.sc.edu, songwang@cec.sc.edu |
| Pseudocode | No | The paper describes the network design and processes in text and diagrams (Figure 3 and 4), but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The implementation code will be available at https://github.com/corecai163/PSCU. |
| Open Datasets | Yes | PU1K: The PU1K dataset is first introduced in PU-GCN (Qian et al. 2021) for point cloud upsampling. KITTI: We also apply our method to the real collected LiDAR point cloud KITTI dataset (Geiger et al. 2013). ShapeNet-PCN: The ShapeNet-PCN dataset is introduced by (Yuan et al. 2018), which is derived from ShapeNet (Chang et al. 2015). |
| Dataset Splits | No | The paper states train/test splits for PU1K ('1,020 training samples and 127 testing samples') and ShapeNet-PCN ('29,774 training samples and 1,200 testing samples') but does not describe a separate validation split. |
| Hardware Specification | Yes | To train this network, we use 2 Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions using Adam as the optimization function but does not provide specific version numbers for software libraries, frameworks (e.g., TensorFlow, PyTorch), or other dependencies. |
| Experiment Setup | Yes | For the PU1K upsampling experiments: To train this network, we set the batch size to 16 and the total epoch number to 150. Besides, we use Adam as the optimization function with a learning rate of 0.0005 at the beginning, and we decrease the learning rate by a factor of 0.5 every 50 epochs. For the ShapeNet-PCN completion experiments: We use 4 Tesla V100 GPUs with a batch size of 32 and a total epoch number of 500. Similar to SnowflakeNet, we use Adam as the optimization function with warm-up settings, where it first takes 200 steps to warm up the learning rate from 0 to 0.0005, and then the learning rate decays by a factor of 0.5 for every 50 epochs. |
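
The Experiment Setup row gives concrete optimizer settings, but, as noted under Software Dependencies, the paper does not name its deep-learning framework. The sketch below is a minimal, hypothetical reconstruction of the reported schedule assuming PyTorch; the placeholder model, the `LinearLR`/`StepLR` scheduler choices, the dummy loss, and the loop structure are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of the reported training schedule, assuming PyTorch.
# Reported: Adam, initial lr 0.0005, decayed by a factor of 0.5 every 50 epochs;
# the completion experiments add a 200-step warm-up from 0 to 0.0005.
import torch

model = torch.nn.Linear(3, 3)  # placeholder standing in for the upsampler network

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)

# Warm-up over the first 200 optimizer steps (LinearLR cannot start at exactly 0,
# so a small start_factor approximates the reported warm-up from 0).
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-3, end_factor=1.0, total_iters=200
)
# Step decay applied once per epoch: halve the learning rate every 50 epochs.
decay = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

num_epochs = 500        # 150 for the PU1K upsampling setup
steps_per_epoch = 100   # placeholder; depends on dataset size and batch size (16 or 32)

step = 0
for epoch in range(num_epochs):
    for _ in range(steps_per_epoch):
        optimizer.zero_grad()
        loss = model(torch.randn(8, 3)).pow(2).mean()  # dummy loss for illustration
        loss.backward()
        optimizer.step()
        if step < 200:
            warmup.step()  # per-iteration warm-up during the first 200 steps
        step += 1
    decay.step()  # per-epoch step decay
```

The warm-up scheduler is stepped per iteration and the decay scheduler per epoch, matching the granularity implied by "200 steps" versus "every 50 epochs" in the quoted setup.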