InfoCD: A Contrastive Chamfer Distance Loss for Point Cloud Completion

Authors: Fangzhou Lin, Yun Yue, Ziming Zhang, Songlin Hou, Kazunori Yamada, Vijaya Kolachalama, Venkatesh Saligrama

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct comprehensive experiments for point cloud completion using InfoCD and observe significant improvements consistently over all the popular baseline networks trained with CD-based losses, leading to new state-of-the-art results on several benchmark datasets." (An illustrative comparison of a CD-based loss and a contrastive variant appears after this table.)
Researcher Affiliation | Collaboration | Fangzhou Lin (1,2), Yun Yue (1), Ziming Zhang (1), Songlin Hou (1,3), Kazunori D. Yamada (2), Vijaya B. Kolachalama (4), Venkatesh Saligrama (4); affiliations: (1) Worcester Polytechnic Institute, USA; (2) Tohoku University, Japan; (3) Dell Technologies, USA; (4) Boston University, USA; emails: {flin2, yyue, zzhang15, shou}@wpi.edu, yamada@tohoku.ac.jp, {vkola, srv}@bu.edu
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as "Pseudocode" or "Algorithm".
Open Source Code | Yes | "Demo code is available at https://github.com/Zhang-VISLab/NeurIPS2023-InfoCD."
Open Datasets | Yes | "Datasets. We conducted experiments for point cloud completion on the following datasets: PCN [10]: This is a subset of ShapeNet [66]... Multi-view partial point cloud (MVP) [67]: This dataset covers 16 categories... ShapeNet-55/34 [16]: ShapeNet-55 contains 55 categories... ShapeNet-Part [65]: This is a subset of ShapeNetCore [66]..."
Dataset Splits | No | The paper provides training and testing splits for several datasets (MVP, ShapeNet-55, ShapeNet-34) but does not specify a separate validation split, which limits reproducibility of model selection. (See the validation-split sketch after this table.)
Hardware Specification | Yes | "We conducted our experiments on a server with 4 NVIDIA A100 80G GPUs and one with 10 NVIDIA Quadro RTX 6000 24G GPUs due to the large model sizes of some baseline networks."
Software Dependencies | No | The paper mentions using PyTorch and the Adam [74] or AdamW [75] optimizers but provides no version numbers for these or any other software components, which are needed to reproduce the software environment. (See the version-logging sketch after this table.)
Experiment Setup | Yes | "Hyperparameters such as learning rates, batch sizes and balance factors in the original losses for training baseline networks were kept consistent with the baseline settings for fair comparisons. Hyperparameter τ in InfoCD was tuned based on grid search, while λ was set to 10⁻⁷ for all the experiments." The roles of τ and λ are illustrated in the loss sketch below.
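The table contrasts networks trained with standard CD-based losses against the same networks trained with InfoCD, with τ grid-searched and λ fixed to a tiny value. As a rough illustration of the two loss families, the PyTorch sketch below implements the standard squared-distance Chamfer loss and a contrastive, temperature-normalized variant. The function `contrastive_cd`, its log-sum-exp normalizer, and the default values of `tau` and `lam` are illustrative assumptions that only mirror the structure of a contrastive CD loss; they are not the paper's exact InfoCD formulation.

```python
import torch

def chamfer_distance(p, q):
    """Standard squared-L2 Chamfer distance.

    p: (N, 3) predicted point cloud, q: (M, 3) ground-truth point cloud.
    """
    d = torch.cdist(p, q).pow(2)   # pairwise squared distances, shape (N, M)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def contrastive_cd(p, q, tau=0.5, lam=1e-7):
    """Illustrative contrastive Chamfer-style loss (an assumption, NOT the
    paper's exact InfoCD formula): the usual nearest-neighbor terms are
    temperature-scaled and regularized by a log-sum-exp normalizer over the
    per-point distance distribution, weighted by a small factor lam."""
    d = torch.cdist(p, q).pow(2)
    min_pq = d.min(dim=1).values   # NN distance for each predicted point
    min_qp = d.min(dim=0).values   # NN distance for each ground-truth point
    nn_terms = min_pq.mean() + min_qp.mean()
    normalizer = (torch.logsumexp(-min_pq / tau, dim=0)
                  + torch.logsumexp(-min_qp / tau, dim=0))
    return nn_terms / tau + lam * normalizer

# Usage: two random clouds stand in for a prediction and its ground truth.
pred, gt = torch.randn(2048, 3), torch.randn(2048, 3)
print(chamfer_distance(pred, gt).item(), contrastive_cd(pred, gt).item())
```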
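Because only train/test splits are specified, a reproducer who wants early stopping or hyperparameter selection has to hold out a validation set from the training data themselves. A minimal sketch using `torch.utils.data.random_split`, with random tensors as a stand-in for whatever dataset loader the repository provides:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# Stand-in for a real completion dataset (substitute the loaders from the
# NeurIPS2023-InfoCD repository): 100 partial/complete point-cloud pairs.
partial = torch.randn(100, 2048, 3)
complete = torch.randn(100, 16384, 3)
train_set = TensorDataset(partial, complete)

# Hold out 10% of the training data as a validation split; fixing the
# generator seed makes the split itself reproducible across runs.
n_val = len(train_set) // 10
gen = torch.Generator().manual_seed(0)
train_subset, val_subset = random_split(
    train_set, [len(train_set) - n_val, n_val], generator=gen)

train_loader = DataLoader(train_subset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_subset, batch_size=32, shuffle=False)
```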
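Since no software versions are given, logging the environment at run time is a cheap way to make a reproduction attempt auditable. A minimal sketch using only standard PyTorch introspection calls:

```python
import platform
import torch

# Record the ancillary software details the paper omits, so a result can be
# matched to the exact environment that produced it.
print("python :", platform.python_version())
print("pytorch:", torch.__version__)
print("cuda   :", torch.version.cuda)
print("cudnn  :", torch.backends.cudnn.version())
print("gpu    :", torch.cuda.get_device_name(0)
      if torch.cuda.is_available() else "none")
```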