PointAttN: You Only Need Attention for Point Cloud Completion

Authors: Jun Wang, Ying Cui, Dongyan Guo, Junxia Li, Qingshan Liu, Chunhua Shen

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate that our PointAttN outperforms state-of-the-art methods on multiple challenging benchmarks."
Researcher Affiliation | Academia | Jun Wang (1), Ying Cui (1), Dongyan Guo (1)*, Junxia Li (2), Qingshan Liu (2), Chunhua Shen (3). (1) Zhejiang University of Technology; (2) Nanjing University of Information Science and Technology; (3) Zhejiang University. Contact: {wangj, cuiying, guodongyan}@zjut.edu.cn, junxiali99@163.com, qsliu@nuist.edu.cn, chunhuashen@zju.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/ohhhyeahhh/PointAttN
Open Datasets | Yes | "To evaluate the effectiveness of our PointAttN, we conduct comprehensive experiments on multiple challenging benchmarks, including Completion3D (Tchapmi et al. 2019), PCN (Yuan et al. 2018), ShapeNet-55/34 (Yu et al. 2021) and KITTI (Geiger et al. 2013)."
Dataset Splits | Yes | "For fair comparison, we follow the common protocols of each dataset for training and testing. To align with previous works, we use the specified training set of Completion3D to train the model and take the L2 Chamfer distance (CD) as the metric. For a fair comparison, we follow the same split settings as PCN (Yuan et al. 2018) during experiments and take the L1 Chamfer distance as the metric." A sketch of the CD-L1/CD-L2 metrics follows the table.
Hardware Specification | Yes | "The proposed framework is implemented in Python with PyTorch and trained on 4 NVIDIA 2080Ti GPUs."
Software Dependencies | No | The paper states "implemented in Python with PyTorch" but does not provide specific version numbers for Python or PyTorch, which is required for reproducibility.
Experiment Setup | Yes | "Models are trained with the Adam optimizer for a total of 400 epochs; the learning rate is initialized to 1e-4 and decayed by 0.7 every 40 epochs. The batch size is set to 32." A training-loop sketch of this schedule follows the Chamfer distance example below.
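
Since both benchmarks are scored with Chamfer distance, here is a minimal sketch of the CD-L1 and CD-L2 metrics quoted above. It assumes dense (B, N, 3) point-cloud tensors and a brute-force nearest-neighbor search; the official repository uses compiled CUDA kernels instead, and conventions differ across papers on whether the two directional terms are summed or averaged.

```python
import torch

def chamfer_distance(p1: torch.Tensor, p2: torch.Tensor, norm: str = "l2") -> torch.Tensor:
    """Chamfer distance between point clouds p1 (B, N, 3) and p2 (B, M, 3).

    norm="l1": CD-L1, mean of Euclidean nearest-neighbor distances (PCN metric).
    norm="l2": CD-L2, mean of squared nearest-neighbor distances (Completion3D metric).
    Returns the batch-mean scalar.
    """
    diff = p1.unsqueeze(2) - p2.unsqueeze(1)   # (B, N, M, 3) pairwise differences
    dist_sq = (diff ** 2).sum(-1)              # (B, N, M) squared Euclidean distances
    if norm == "l1":
        d = dist_sq.clamp(min=1e-12).sqrt()    # un-squared distances for CD-L1
    else:
        d = dist_sq                            # squared distances for CD-L2
    # Nearest neighbor in each direction, averaged over points, summed over directions
    return (d.min(2).values.mean(1) + d.min(1).values.mean(1)).mean()
```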
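And a hedged sketch of the stated training schedule: Adam, learning rate 1e-4 decayed by 0.7 every 40 epochs (mapped here onto PyTorch's StepLR), batch size 32, 400 epochs. The model and dataset below are dummy placeholders, not the PointAttN network or its data pipeline; see the repository above for the real ones.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder stand-in for the PointAttN network (assumption, for illustration only)
model = torch.nn.Sequential(torch.nn.Linear(3, 3))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# "decayed by 0.7 every 40 epochs" as a step schedule
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.7)

# Dummy (partial, complete) point-cloud pairs standing in for the real datasets
dataset = TensorDataset(torch.randn(64, 2048, 3), torch.randn(64, 2048, 3))
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for epoch in range(400):
    for partial, gt in loader:
        optimizer.zero_grad()
        loss = chamfer_distance(model(partial), gt)  # from the sketch above
        loss.backward()
        optimizer.step()
    scheduler.step()  # lr *= 0.7 at epochs 40, 80, ...
```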