Interpretable3D: An Ad-Hoc Interpretable Classifier for 3D Point Clouds
Authors: Tuo Feng, Ruijie Quan, Xiaohan Wang, Wenguan Wang, Yi Yang
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of Interpretable3D on four popular point cloud models: DGCNN, PointNet2, PointMLP, and PointNeXt. Our Interpretable3D demonstrates comparable or superior performance compared to softmax-based black-box models in the tasks of 3D shape classification and part segmentation. Our experiments are conducted on three well-known public benchmarks (i.e., ModelNet40 (Wu et al. 2015) and ScanObjectNN (Uy et al. 2019) for shape classification and ShapeNetPart (Yi et al. 2016) for part segmentation). |
| Researcher Affiliation | Academia | Tuo Feng1, Ruijie Quan2*, Xiaohan Wang2, Wenguan Wang2, Yi Yang2 1ReLER, AAII, University of Technology Sydney 2ReLER, CCAI, Zhejiang University feng.tuo@student.uts.edu.au, quanruij@hotmail.com, {wxh1996111, wenguanwang.ai}@gmail.com, yi.yang@uts.edu.au |
| Pseudocode | No | The paper describes the algorithm steps in text and diagrams (Figure 2) but does not provide a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Our code is released at: github.com/FengZicai/Interpretable3D. |
| Open Datasets | Yes | Our experiments are conducted on three well-known public benchmarks (i.e., ModelNet40 (Wu et al. 2015) and ScanObjectNN (Uy et al. 2019) for shape classification and ShapeNetPart (Yi et al. 2016) for part segmentation). |
| Dataset Splits | No | The paper states 'The training and testing configurations follow the default settings of the respective methods mentioned above.' but does not itself report explicit training/validation/test split percentages or sample counts for the datasets used. |
| Hardware Specification | No | The paper does not describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions various models and backbones used (e.g., DGCNN, PointNet2, PointMLP, PointNeXt) but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | We empirically set S = 15. The momentum coefficient and µ in Eq. (8) are set as 0.999 and 0.4, following (He et al. 2020; Kohonen 1990, 2012). For shape classification, the model takes 1,024 points as input, and for part segmentation, it takes 2,048 points as input. |
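
The Experiment Setup row quotes concrete hyperparameters (S = 15, momentum coefficient 0.999, µ = 0.4 in Eq. (8), and 1,024 / 2,048 input points). The sketch below is a minimal, hypothetical illustration of how those values could be collected into a training configuration, together with a momentum-style prototype update in the spirit of the cited works (He et al. 2020; Kohonen 1990, 2012). It is not the authors' released code; the names `CONFIG` and `momentum_update` are assumptions made here for illustration.

```python
# Hypothetical sketch only: wires the quoted hyperparameters into a config
# and shows a generic exponential-moving-average (momentum) prototype update.
import torch

CONFIG = {
    "num_prototypes_per_class": 15,   # S = 15 (quoted from the paper)
    "momentum": 0.999,                # momentum coefficient (quoted)
    "mu": 0.4,                        # µ in Eq. (8) (quoted)
    "num_points_classification": 1024,
    "num_points_segmentation": 2048,
}

def momentum_update(prototypes: torch.Tensor,
                    assigned_features: torch.Tensor,
                    momentum: float = CONFIG["momentum"]) -> torch.Tensor:
    """EMA update of prototype vectors.

    prototypes:        (num_prototypes, feat_dim) current prototypes.
    assigned_features: (num_prototypes, feat_dim) mean feature of the
                       samples assigned to each prototype at this step.
    """
    return momentum * prototypes + (1.0 - momentum) * assigned_features

if __name__ == "__main__":
    protos = torch.randn(CONFIG["num_prototypes_per_class"], 256)
    feats = torch.randn_like(protos)
    protos = momentum_update(protos, feats)
    print(protos.shape)  # torch.Size([15, 256])
```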