A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion

Authors: Zhaoyang Lyu, Zhifeng Kong, Xudong Xu, Liang Pan, Dahua Lin

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on various benchmark datasets show that our PDR paradigm outperforms previous state-of-the-art methods for point cloud completion.
Researcher Affiliation | Collaboration | 1 CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong; 2 Shanghai AI Laboratory; 3 University of California, San Diego; 4 S-Lab, Nanyang Technological University
Pseudocode | No | The paper describes methods and processes in text and diagrams but does not include formal pseudocode blocks or algorithms.
Open Source Code | Yes | Code is released at https://github.com/ZhaoyangLyu/Point_Diffusion_Refinement.
Open Datasets | Yes | MVP: the MVP dataset (Pan et al., 2021) has 62400 training partial-complete point cloud pairs and 41600 testing pairs sampled from ShapeNet (Chang et al., 2015). MVP-40: the MVP-40 dataset (Pan et al., 2021) consists of 41600 training samples and 64168 testing samples from 40 categories in ModelNet40 (Wu et al., 2015). Completion3D: this dataset (Tchapmi et al., 2019) consists of 28974 point cloud pairs for training and 1184 for testing from 8 object categories in ShapeNet.
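The split sizes quoted above can be checked directly against the released files. Below is a minimal sketch, assuming the MVP archives are distributed as HDF5 with `incomplete_pcds`/`complete_pcds` arrays and the file names shown; both the file names and the key names are assumptions, not stated in this section.

```python
# Sanity-check the reported MVP split sizes (62400 training / 41600 testing pairs).
# File names and dataset keys are assumptions about the released HDF5 layout.
import h5py

def count_pairs(path):
    with h5py.File(path, "r") as f:
        # each partial (incomplete) cloud is paired with a complete ground-truth cloud
        return f["incomplete_pcds"].shape[0], f["complete_pcds"].shape[0]

train_partial, train_complete = count_pairs("MVP_Train_CP.h5")
test_partial, test_complete = count_pairs("MVP_Test_CP.h5")
print(train_partial, test_partial)  # expected: 62400 and 41600 per the paper
```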
Dataset Splits | No | The paper specifies training and testing sets but does not explicitly define a separate validation split with proportions or counts. It mentions evaluating on the training and test sets during checkpoint selection, which serves a similar purpose, but this is not a distinct validation split.
Hardware Specification | Yes | We also report the average generation time of a single point cloud evaluated on one NVIDIA GeForce RTX 2080 Ti GPU for DDPMs with different numbers of reverse steps.
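Average per-cloud generation time of this kind is typically measured with CUDA-synchronized wall-clock timing. The snippet below is a generic sketch; `sample_fn` is a placeholder for whatever reverse-diffusion sampler is being timed, not the authors' interface.

```python
# Generic GPU timing sketch for average per-sample generation time.
# `sample_fn` is a placeholder that should generate one point cloud per call.
import time
import torch

def average_generation_time(sample_fn, num_samples=100):
    torch.cuda.synchronize()        # flush pending kernels before starting the clock
    start = time.time()
    for _ in range(num_samples):
        sample_fn()                 # generate one point cloud
    torch.cuda.synchronize()        # wait for all generations to finish
    return (time.time() - start) / num_samples
```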
Software Dependencies | No | The paper mentions software such as the Adam optimizer and the Python function scipy.optimize.fmin but does not specify version numbers for these components or their dependencies (e.g., specific Python or PyTorch versions).
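For reference, `scipy.optimize.fmin` is SciPy's Nelder-Mead simplex minimizer. This section does not say which objective the authors passed to it, so the quadratic below is purely illustrative.

```python
# Illustrative use of scipy.optimize.fmin (Nelder-Mead simplex search).
# The objective is a placeholder; the paper's actual objective is not given here.
import numpy as np
from scipy.optimize import fmin

def objective(x):
    # simple convex surrogate with its minimum at (1.0, -2.0)
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x_opt = fmin(objective, x0=np.array([0.0, 0.0]), xtol=1e-6, disp=False)
print(x_opt)  # approximately [1.0, -2.0]
```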
Experiment Setup | Yes | In all experiments, we use the Adam optimizer with a learning rate of 2 × 10^-4. For experiments of our PDR paradigm in Table 1 and Table 2 in the main text, we use the data augmentation described in Appendix Section B.3. We train our Conditional Generation Network for 340 epochs, 200 epochs, and 500 epochs on the MVP, MVP-40, and Completion3D datasets, respectively. We train the Refinement Network for 100, 150, and 200 epochs on these datasets, respectively.
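The optimizer and epoch schedule above translate into a straightforward training configuration. The sketch below is a minimal, assumed setup: the model class, data loader, and loss function are placeholders rather than the authors' code, while the learning rate and per-dataset epoch counts follow the numbers quoted above.

```python
# Minimal training-configuration sketch based on the reported setup:
# Adam with lr = 2e-4 and per-dataset epoch budgets for the two networks.
# Model, loader, and loss function are placeholders, not the authors' implementation.
import torch

EPOCHS = {
    "MVP":          {"generation": 340, "refinement": 100},
    "MVP-40":       {"generation": 200, "refinement": 150},
    "Completion3D": {"generation": 500, "refinement": 200},
}

def make_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    # learning rate of 2e-4 as stated in the experiment setup
    return torch.optim.Adam(model.parameters(), lr=2e-4)

def train(model, loader, loss_fn, dataset="MVP", stage="generation"):
    optimizer = make_optimizer(model)
    for epoch in range(EPOCHS[dataset][stage]):
        for partial, complete in loader:   # placeholder (partial, complete) point cloud pairs
            optimizer.zero_grad()
            loss = loss_fn(model(partial), complete)
            loss.backward()
            optimizer.step()
```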