BoW Pooling: A Plug-and-Play Unit for Feature Aggregation of Point Clouds
Authors: Xiang Zhang, Xiao Sun, Zhouhui Lian (pp. 3403–3411)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that the proposed BoW pooling effectively improves performance in point cloud classification, shape retrieval, and segmentation tasks, and outperforms other existing symmetric functions. |
| Researcher Affiliation | Collaboration | Xiang Zhang1*, Xiao Sun2,1*, Zhouhui Lian1 1Wangxuan Institute of Computer Technology, Peking University, Beijing, P.R. China 2Meituan {1801210733, lianzhouhui}@pku.edu.cn, sunxiao10@meituan.com |
| Pseudocode | No | The paper provides mathematical formulations and diagrams to explain the proposed method, but it does not include a distinct 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | The classification experiment is conducted on ModelNet40 (Wu et al. 2015), which is composed of 12311 CAD models in 40 classes. We evaluate the effectiveness of the BoW pooling in the shape retrieval task on the SHREC15 Non-rigid dataset (Lian et al. 2015). We also evaluate the effectiveness of the BoW pooling through point cloud segmentation experiments on ShapeNet part (Yi et al. 2016) and S3DIS (Armeni et al. 2016) datasets. |
| Dataset Splits | Yes | We use the same train-test split as PointNet. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | The experimental settings keep the same as the original approach in terms of hyperparameters and the optimizer. The adapted networks with our BoW unit are trained for 400 epochs in total using the mixing dictionary update strategy. We also find that setting T to 50 is better than 30 or 70. For TLU, the percentage of points that are kept is set to 0.25, 0.5 or 0.75. The feature dimension of elements in the dictionary is set to 512, while the number of the elements differs. The overall accuracy does not show a great difference by setting the dictionary size from 256 to 3072, and reaches the highest value at the size of 1024. |
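Since the paper releases no code, the hyperparameters in the Experiment Setup row can be made concrete with a minimal sketch of a BoW-style pooling step: per-point features are assigned to the nearest codeword in a dictionary and aggregated into a normalized histogram. This is an illustrative assumption, not the paper's exact formulation; in particular, the hard nearest-neighbor assignment and the norm-based TLU truncation criterion here are hypothetical simplifications. The dimensions (feature dim 512, dictionary size 1024) and the keep ratio 0.5 come from the table above.

```python
import numpy as np

def bow_pooling(features, dictionary, keep_ratio=0.5):
    """Sketch of BoW-style pooling (illustrative, not the paper's method).

    features:   (N, D) per-point features
    dictionary: (K, D) learnable codewords (random here for illustration)
    keep_ratio: fraction of points kept by the TLU-style truncation
    """
    # TLU-style truncation: keep the top fraction of points by feature
    # norm (hypothetical criterion chosen for this sketch).
    n_keep = max(1, int(len(features) * keep_ratio))
    norms = np.linalg.norm(features, axis=1)
    kept = features[np.argsort(norms)[::-1][:n_keep]]

    # Squared distances via the expansion |x - c|^2 = |x|^2 + |c|^2 - 2 x·c,
    # which avoids materializing an (n_keep, K, D) tensor.
    sq = ((kept ** 2).sum(1)[:, None]
          + (dictionary ** 2).sum(1)[None, :]
          - 2.0 * kept @ dictionary.T)

    # Hard-assign each kept point to its nearest codeword and build
    # a normalized histogram over the dictionary (the "bag of words").
    assignments = sq.argmin(axis=1)
    hist = np.bincount(assignments, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

# Dimensions from the table: D = 512 feature dim, K = 1024 codewords.
rng = np.random.default_rng(0)
feats = rng.normal(size=(2048, 512))
codebook = rng.normal(size=(1024, 512))
pooled = bow_pooling(feats, codebook, keep_ratio=0.5)
```

In the actual method the dictionary is updated during training (the "mixing dictionary update strategy" over 400 epochs); this sketch only shows the forward aggregation that replaces a symmetric function such as max pooling.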