PointLIE: Locally Invertible Embedding for Point Cloud Sampling and Recovery

Authors: Weibing Zhao, Xu Yan, Jiantao Gao, Ruimao Zhang, Jiayan Zhang, Zhen Li, Song Wu, Shuguang Cui

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that PointLIE outperforms the state-of-the-art point cloud sampling and upsampling methods both quantitatively and qualitatively. To quantitatively evaluate the performance of different methods, three commonly-used metrics are adopted, i.e., Chamfer distance (CD), Hausdorff distance (HD) and point-to-surface distance (P2F). (A sketch of the CD and HD metrics is given after the table.)
Researcher Affiliation | Academia | Weibing Zhao (1,2), Xu Yan (1,2), Jiantao Gao (2,3), Ruimao Zhang (1,2), Jiayan Zhang (1), Zhen Li (1,2), Song Wu (4), Shuguang Cui (1,2); 1 The Chinese University of Hong Kong, Shenzhen; 2 Shenzhen Research Institute of Big Data; 3 Shanghai University; 4 Shenzhen Luohu Hospital; {weibingzhao@link., xuyan1@link., lizhen@}cuhk.edu.cn
Pseudocode | No | The paper describes the model architecture and processes using figures and mathematical equations (e.g., Eqs. 3, 4, 5, and 6) but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | No explicit statement about open-sourcing the code is made, and no link to a code repository is provided.
Open Datasets | Yes | To fully evaluate the proposed PointLIE, we compared our method with the state-of-the-art methods on the PU-147 [Li et al., 2019] dataset, which follows the official split of 120/27 for our training and testing sets.
Dataset Splits | No | The paper mentions the 'PU-147 [Li et al., 2019] dataset, which follows the official split of 120/27 for our training and testing sets,' but it does not explicitly detail the percentages or counts for training, validation, and test splits, nor does it mention a validation set.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA, or library versions) are mentioned in the paper.
Experiment Setup | Yes | Under the premise of balancing efficiency and effectiveness, we set the PI block number M as 8 in the 4× scale task, and M as 4 in the remaining 8× and 16× tasks. Furthermore, we set k as 3 to ensure that the information in the discarded points can be sufficiently preserved. (A configuration sketch collecting these values is given after the table.)
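
The Research Type row cites Chamfer distance (CD), Hausdorff distance (HD), and point-to-surface distance (P2F) as evaluation metrics. For reference, below is a minimal NumPy sketch of how CD and HD are commonly computed between two point sets; it is an illustration under common definitions, not the authors' evaluation code (none is released), and P2F is omitted because it requires the ground-truth surface.

    import numpy as np

    def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
        """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).
        Note: papers vary on whether squared distances and/or averaging are used."""
        # Pairwise Euclidean distances between every point in p and every point in q.
        d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # shape (N, M)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def hausdorff_distance(p: np.ndarray, q: np.ndarray) -> float:
        """Symmetric Hausdorff distance: worst-case nearest-neighbour deviation."""
        d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
        return max(d.min(axis=1).max(), d.min(axis=0).max())

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        a, b = rng.random((1024, 3)), rng.random((4096, 3))
        print(chamfer_distance(a, b), hausdorff_distance(a, b))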
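
The Experiment Setup row reports the only stated hyperparameters: the number of PI blocks M for each sampling scale and the parameter k (set to 3). A hypothetical configuration sketch that merely collects these reported values; the dictionary and key names are illustrative and do not come from any released code.

    # Hypothetical config reflecting the hyperparameters reported in the paper;
    # key names are illustrative, since no official code or config is released.
    EXPERIMENT_SETUP = {
        # sampling scale -> settings
        4:  {"num_pi_blocks": 8, "k": 3},
        8:  {"num_pi_blocks": 4, "k": 3},
        16: {"num_pi_blocks": 4, "k": 3},
    }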