Balanced Chamfer Distance as a Comprehensive Metric for Point Cloud Completion
Authors: Tong Wu, Liang Pan, Junzhe Zhang, Tai Wang, Ziwei Liu, Dahua Lin
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive investigations are provided comparing different metrics and methods for the task of point cloud completion. Experimental results validate that the proposed metric, Density-aware Chamfer Distance, successfully overcomes the aforementioned issues of CD. Our work is implemented in PyTorch and run on a Tesla V100 GPU. All the models are trained using the Adam optimizer [10] with the learning rate initialized at 1e-4 and decayed by 0.7 every 40 epochs. We use a batch size of 32 and train for a total of 80 epochs. |
| Researcher Affiliation | Collaboration | Tong Wu1, Liang Pan2, Junzhe Zhang2,4, Tai Wang1,3, Ziwei Liu2, Dahua Lin1,3,5 1SenseTime-CUHK Joint Lab, The Chinese University of Hong Kong, 2S-Lab, Nanyang Technological University, 3Shanghai AI Laboratory, 4SenseTime Research, 5Centre of Perceptual and Interactive Intelligence |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | Our code will be available at https://github.com/wutong16/Density_aware_Chamfer_Distance. |
| Open Datasets | Yes | We use the recently proposed MVP Dataset [17] for our study and experiments. |
| Dataset Splits | No | It is a multi-view partial point cloud dataset covering 16 categories with 62,400 and 41,600 pairs for training and testing, respectively. |
| Hardware Specification | Yes | Our work is implemented in PyTorch and run on a Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify its version number or any other software dependencies with specific version numbers. |
| Experiment Setup | Yes | All the models are trained using the Adam optimizer [10] with the learning rate initialized at 1e-4 and decayed by 0.7 every 40 epochs. We use a batch size of 32 and train for a total of 80 epochs. We set α = 1000 for the evaluation of DCD, and α ∈ [40, 100] for training. We set λ ∈ [0, 0.5] and β = 9, γ = 1 for our approach in the main experiments. (Code sketches of the DCD metric and this training schedule follow the table.) |
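
The quoted setup gives enough detail to sketch how the proposed metric is evaluated. Below is a minimal, brute-force PyTorch sketch of the Density-aware Chamfer Distance; the function name, the use of `torch.cdist`, and the squared-distance convention in the exponent are illustrative assumptions made here, and the official repository linked above holds the authoritative CUDA implementation.

```python
import torch

def density_aware_chamfer_distance(s1, s2, alpha=1000.0):
    """Brute-force sketch of DCD between point clouds s1 (N, 3) and s2 (M, 3).

    alpha is the temperature from the paper: 1000 for evaluation,
    40-100 for training. Squared L2 distances in the exponent are an
    assumption here, following the common CD-L2 convention; see the
    official repo for the exact formulation.
    """
    dist2 = torch.cdist(s1, s2).pow(2)     # (N, M) squared pairwise distances
    d12, idx12 = dist2.min(dim=1)          # nearest point in s2 for each x in s1
    d21, idx21 = dist2.min(dim=0)          # nearest point in s1 for each y in s2

    # Count how many queries selected each target point as nearest neighbor;
    # this density weighting is what distinguishes DCD from plain CD.
    n2 = torch.bincount(idx12, minlength=s2.shape[0]).float()
    n1 = torch.bincount(idx21, minlength=s1.shape[0]).float()

    term1 = (1.0 - torch.exp(-alpha * d12) / n2[idx12]).mean()
    term2 = (1.0 - torch.exp(-alpha * d21) / n1[idx21]).mean()
    return 0.5 * (term1 + term2)

# Toy usage: two random 2048-point clouds; the result is a scalar in [0, 1).
a, b = torch.rand(2048, 3), torch.rand(2048, 3)
print(density_aware_chamfer_distance(a, b))
```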
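
Likewise, the reported optimizer settings map directly onto standard PyTorch calls. The skeleton below uses a hypothetical stand-in model; only the Adam learning rate, the 0.7 decay factor every 40 epochs, the batch size, and the 80-epoch budget come from the paper.

```python
import torch
from torch import nn

model = nn.Linear(3, 3)  # hypothetical stand-in for a completion network (e.g. PCN, VRCNet)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Decay the learning rate by a factor of 0.7 every 40 epochs, as reported.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.7)

for epoch in range(80):  # 80 epochs total; the paper uses a batch size of 32
    # ... iterate over training batches here, computing a DCD-based loss
    #     (alpha in [40, 100] for training) and calling optimizer.step() ...
    scheduler.step()
```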