Geodesic Self-Attention for 3D Point Clouds
Authors: Zhengyu Li, Xuan Tang, Zihao Xu, Xihao Wang, Hui Yu, Mingsong Chen, Xian Wei
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we comprehensively evaluate the performance of Point-GT on several benchmarks, including classification, part segmentation, and few-shot classification. In the ablation studies, we qualitatively and quantitatively assess the effectiveness of GSA. |
| Researcher Affiliation | Academia | 1) East China Normal University; 2) Technical University of Munich; 3) FJIRSM, Chinese Academy of Sciences |
| Pseudocode | Yes | Algorithm 1 Graph-based Geodesic Distance Score |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] We have uploaded our core code in the supplemental material. |
| Open Datasets | Yes | ModelNet40 [40], one of the most popular 3D object classification datasets, contains 12,311 synthesized CAD models from 40 categories. |
| Dataset Splits | Yes | We split the dataset into 9,843 training and 2,468 validation instances following the previous standard practice. |
| Hardware Specification | No | The paper states, 'Each computation of the adjacency graph is implemented by a single CUDA kernel and achieves more than a hundred times faster computation,' and confirms in the checklist that compute resources were included; however, it does not specify concrete hardware details such as GPU models (e.g., NVIDIA A100, RTX 3090) or CPU types. |
| Software Dependencies | No | The paper mentions 'Numba [18] to implement a Compute Unified Device Architecture (CUDA) operator', but it does not specify version numbers for Numba, CUDA, or any other software dependencies. |
| Experiment Setup | Yes | We split the dataset into 9,843 training and 2,468 validation instances following the previous standard practice. Standard random scaling and random translation are applied for data augmentation during the training. More experiment details are provided in supplementary materials. |
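The table above references the paper's Algorithm 1 (Graph-based Geodesic Distance Score) without reproducing it. As a rough illustration of the general idea, not the paper's actual algorithm, the sketch below approximates geodesic distances on a point cloud by running Dijkstra's shortest-path search over a k-nearest-neighbor graph; the point set, `k`, and function names are all illustrative assumptions.

```python
# Hypothetical sketch (NOT the paper's Algorithm 1): approximate geodesic
# distances on a point cloud via Dijkstra over a k-nearest-neighbor graph.
import heapq
import math


def knn_graph(points, k):
    """Adjacency list: connect each point to its k nearest Euclidean neighbors."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        nearest = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        for d, j in nearest[:k]:
            adj[i].append((j, d))
            adj[j].append((i, d))  # keep edges symmetric
    return adj


def geodesic_distances(points, source, k=3):
    """Shortest-path (graph-geodesic) distances from `source` to all points."""
    adj = knn_graph(points, k)
    dist = [math.inf] * len(points)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


# Points along a bent curve: the geodesic follows the surface, so it can
# exceed the straight-line (Euclidean) distance between the endpoints.
pts = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (2, 1, 0), (2, 2, 0)]
geo = geodesic_distances(pts, source=0, k=2)
```

Distances like these could then be turned into attention scores (e.g., by negating and softmax-normalizing them), which is the role the paper's geodesic distance score plays inside its self-attention; the paper additionally accelerates the adjacency-graph step with a Numba-implemented CUDA kernel, which this CPU sketch does not attempt.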