Morphing and Sampling Network for Dense Point Cloud Completion

Authors: Minghua Liu, Lu Sheng, Sheng Yang, Jing Shao, Shi-Min Hu

AAAI 2020, pp. 11596-11603 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments verify the effectiveness of our method and demonstrate that it outperforms the existing methods in both the Earth Mover's Distance (EMD) and the Chamfer Distance (CD)." (See the Chamfer Distance sketch below the table.)
Researcher Affiliation | Collaboration | Minghua Liu (UC San Diego), Lu Sheng (Beihang University), Sheng Yang (Tsinghua University), Jing Shao (SenseTime), Shi-Min Hu (Tsinghua University)
Pseudocode | No | The paper describes its methods in paragraph text and mathematical formulas but does not include any structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement about releasing source code for the described method, nor does it provide a direct link to a code repository or mention code availability in supplementary materials.
Open Datasets | Yes | "We evaluate our methods on the ShapeNet dataset (Chang et al. 2015)."
Dataset Splits | Yes | "For a fair comparison, the train/test split is the same as in PCN (Yuan et al. 2018). We generate 50 pairs of partial and complete point clouds for each CAD model, resulting in 30,974 × 50 pairs of point clouds for training and testing."
Hardware Specification | No | "We trained our models on 8 Nvidia GPUs for 2.3 × 10^5 iterations (i.e., 25 epochs) with a batch size of 160." The quote gives only the GPU count, not the GPU model or memory, so the hardware specification is incomplete.
Software Dependencies | No | The paper mentions components such as the Adam optimizer and ReLU activation functions but does not name any software package with a version number, such as "PyTorch 1.9" or "TensorFlow 2.x".
Experiment Setup | Yes | "We trained our models on 8 Nvidia GPUs for 2.3 × 10^5 iterations (i.e., 25 epochs) with a batch size of 160. The initial learning rate is 1e-3 and is decayed by 0.1 per 10 epochs. Adam is used as the optimizer." (See the optimizer sketch below the table.)
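
For reference on the CD metric quoted in the Research Type row, below is a minimal NumPy sketch of the symmetric Chamfer Distance between two point sets. It is not the authors' implementation: the mean-of-nearest-neighbor-distances form shown here is one common variant (some works use squared distances or sums instead), and the function name chamfer_distance is ours.

```python
import numpy as np

def chamfer_distance(s1: np.ndarray, s2: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets of shape (N, 3) and (M, 3).

    One common variant: mean nearest-neighbor distance in both directions.
    (Assumption: the paper may instead use squared distances or sums.)
    """
    # Pairwise Euclidean distances between every point in s1 and s2: shape (N, M).
    dists = np.linalg.norm(s1[:, None, :] - s2[None, :, :], axis=-1)
    # Each point's distance to its nearest neighbor in the other set, averaged.
    return float(dists.min(axis=1).mean() + dists.min(axis=0).mean())

# Usage: two random 1024-point clouds stand in for partial/complete shapes.
rng = np.random.default_rng(0)
print(chamfer_distance(rng.random((1024, 3)), rng.random((1024, 3))))
```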
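Similarly, the Experiment Setup row translates into a short PyTorch sketch of the reported optimization schedule. The network and the loop body are hypothetical placeholders (no official code is released), but the Adam optimizer, the 1e-3 initial learning rate, the 0.1 decay every 10 epochs, and the 25-epoch budget come from the quoted setup.

```python
import torch

# Placeholder network: the authors' completion model is not public, so a
# trivial module stands in for it here.
model = torch.nn.Linear(3, 3)

# Reported setup: Adam with an initial learning rate of 1e-3.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Learning rate decayed by a factor of 0.1 every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(25):  # 25 epochs (~2.3e5 iterations at batch size 160)
    # ... forward/backward passes over batches of 160 partial point clouds ...
    scheduler.step()
```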