GAM: Gradient Attention Module of Optimization for Point Clouds Analysis
Authors: Haotian Hu, Fanyi Wang, Zhiwang Zhang, Yaonong Wang, Laifeng Hu, Yanhao Zhang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments were conducted on five benchmark datasets to demonstrate the effectiveness and generalization capability of the proposed GAM for 3D point cloud analysis. In particular, on the S3DIS dataset (Armeni et al. 2016), GAM achieves the best performance among current point-based models, with mIoU/OA/mAcc of 74.4%/90.6%/83.2%, respectively. |
| Researcher Affiliation | Collaboration | Haotian Hu (1), Fanyi Wang (2*), Zhiwang Zhang (3), Yaonong Wang (1), Laifeng Hu (1), Yanhao Zhang (2). Affiliations: (1) Zhejiang Leapmotor Technology Co., Ltd.; (2) OPPO Research Institute; (3) The University of Sydney |
| Pseudocode | Yes | Algorithm 1: Gradient Attention Module |
| Open Source Code | No | The paper does not provide any explicit statement or link for open-source code. |
| Open Datasets | Yes | We conduct experiments on the 3D semantic segmentation task using the S3DIS dataset (Armeni et al. 2016), the 3D shape classification task using the ScanObjectNN dataset (Uy et al. 2019) and the ModelNet40 dataset (Wu et al. 2015), the 3D part segmentation task using ShapeNet (Yi et al. 2016), and the 3D object detection task on the KITTI dataset (Geiger et al. 2013). |
| Dataset Splits | Yes | We report mean Intersection-over-Union (mIoU), overall accuracy (OA), mean accuracy (mAcc), and throughput (TP) for the 6-fold cross-validation of the S3DIS dataset. ... ModelNet40 dataset (Wu et al. 2015), which has 12311 CAD samples, including 9843 training samples and 2468 test samples. |
| Hardware Specification | Yes | Experiments are run on an NVIDIA RTX 3090 GPU and an AMD EPYC 7402 CPU. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | The same training strategy as each baseline is employed in our experiments, except that GAM is added to each downsampling layer of the model; λ is set to 1, and the number of channels of the two-layer MLP in GAM is set to (1,16), (16,1) (see the hedged sketch after this table). |
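
The paper provides GAM only as pseudocode (Algorithm 1) and releases no reference implementation. Below is a minimal PyTorch sketch of the reported configuration: only the two-layer MLP widths (1, 16) and (16, 1) and λ = 1 are taken from the paper text; the scalar per-point gradient input, the sigmoid gating, and the residual re-weighting are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn


class GradientAttentionSketch(nn.Module):
    """Hedged sketch of a gradient-attention block (not the official GAM).

    From the paper: a two-layer MLP with channels (1, 16), (16, 1) and
    lambda = 1, inserted at each downsampling layer. Everything else here
    (scalar gradient input, sigmoid gate, residual scaling) is assumed.
    """

    def __init__(self, lam: float = 1.0):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, 16),   # channel widths (1, 16) as reported
            nn.ReLU(inplace=True),
            nn.Linear(16, 1),   # channel widths (16, 1) as reported
            nn.Sigmoid(),       # assumption: squash to a weight in (0, 1)
        )
        self.lam = lam          # lambda = 1 as reported

    def forward(self, feats: torch.Tensor, grad_scalar: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C) point features from the downsampling layer.
        # grad_scalar: (B, N, 1) per-point gradient magnitude computed
        # upstream (placeholder for the paper's gradient encoding).
        attn = self.mlp(grad_scalar)             # (B, N, 1) attention weights
        return feats + self.lam * attn * feats   # assumed residual re-weighting


# Usage on random tensors; shapes are chosen arbitrarily for illustration.
block = GradientAttentionSketch()
feats = torch.randn(2, 1024, 64)     # batch of 2, 1024 points, 64 channels
grad = torch.randn(2, 1024, 1).abs() # stand-in gradient magnitudes
out = block(feats, grad)             # -> torch.Size([2, 1024, 64])
```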