Arbitrary-Scale Point Cloud Upsampling by Voxel-Based Network with Latent Geometric-Consistent Learning
Authors: Hang Du, Xuejun Yan, Jingjing Wang, Di Xie, Shiliang Pu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments indicate the proposed approach outperforms the state-of-the-art approaches not only in terms of fixed upsampling rates but also for arbitrary-scale upsampling. |
| Researcher Affiliation | Industry | Hikvision Research Institute, Hangzhou, China |
| Pseudocode | No | The paper describes the methods in prose and includes figures (e.g., Figure 2 for overview), but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/hikvision-research/3DVision |
| Open Datasets | Yes | To make the experiments reproducible, we utilize two public datasets with their settings directly, including PU-GAN (Li et al. 2019) and PU1K (Qian et al. 2021a). In addition, we also employ a real-scanned dataset, i.e., ScanObjectNN (Uy et al. 2019), for qualitative evaluation. |
| Dataset Splits | No | The paper mentions 'Training details' and 'Evaluation' sections, and specifies using 'input test point clouds' for evaluation. However, it does not explicitly provide details about a distinct validation dataset split or its size/percentage. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers, such as programming languages or library versions. |
| Experiment Setup | Yes | Our models are trained by 100 epochs with a batch size of 64 on PU1K dataset, and a batch size of 32 on PU-GAN dataset. The learning rate begins at 0.001 and drops by a decay rate of 0.7 every 50k iterations. ... For loss balanced weights, we empirically set λ1 = 300, λ2 = 0.01, λ3 = 0.3, λ4 = 100, λ5 = 1e10. The resampling rate is 4, and k is 16 in surface patches. |
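
The Experiment Setup row quotes the key training hyperparameters (100 epochs, batch size 64 on PU1K / 32 on PU-GAN, initial learning rate 0.001 decayed by 0.7 every 50k iterations, and the loss-balance weights λ1..λ5). As a minimal sketch of how such a schedule could be wired up, the snippet below assumes a PyTorch-style training loop with a placeholder model, dummy patch data, and a single stand-in loss term; it is not the authors' released code, which is available at the repository linked above.

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR

# Hyperparameters quoted in the Experiment Setup row.
EPOCHS = 100
BATCH_SIZE = 64                 # 64 on PU1K, 32 on PU-GAN
INITIAL_LR = 1e-3
LR_DECAY_RATE = 0.7
LR_DECAY_STEP_ITERS = 50_000    # decay applied every 50k iterations
LAMBDAS = (300, 0.01, 0.3, 100, 1e10)  # loss-balance weights λ1..λ5

# Placeholder standing in for the voxel-based upsampling network.
model = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=INITIAL_LR)
# The decay is specified per iteration, so the scheduler is stepped every batch.
scheduler = StepLR(optimizer, step_size=LR_DECAY_STEP_ITERS, gamma=LR_DECAY_RATE)

for epoch in range(EPOCHS):
    for _ in range(100):  # stand-in for iterating a patch DataLoader
        sparse = torch.rand(BATCH_SIZE, 256, 3)   # dummy sparse input patch
        dense = torch.rand(BATCH_SIZE, 1024, 3)   # dummy dense ground truth
        optimizer.zero_grad()
        pred = model(sparse)
        # Single placeholder term; the paper combines five weighted loss terms.
        loss = LAMBDAS[0] * nn.functional.mse_loss(pred.mean(1), dense.mean(1))
        loss.backward()
        optimizer.step()
        scheduler.step()
```

The per-iteration `StepLR` step mirrors the quoted "drops by a decay rate of 0.7 every 50k iterations"; whether the authors implement the decay with this scheduler or a manual rule is not stated in the paper.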