Joint Multi-view 2D Convolutional Neural Networks for 3D Object Classification
Authors: Jinglin Xu, Xiangsen Zhang, Wenbin Li, Xinwang Liu, Junwei Han
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that the proposed method is able to outperform current state-of-the-art methods on 3D object classification. |
| Researcher Affiliation | Academia | 1Northwestern Polytechnical University, Xi'an, China 2Nanjing University, Nanjing, China 3National University of Defense Technology, Changsha, China |
| Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the methodology. |
| Open Datasets | Yes | ModelNet40 [Wu et al., 2015], provided on the Princeton ModelNet website (http://modelnet.cs.princeton.edu/), is a subset of ModelNet and has 12,311 models from 40 common categories. |
| Dataset Splits | Yes | For the classification task, all methods are evaluated on ModelNet40, following [Su et al., 2015] to conduct the training/testing split. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU/CPU models, memory specifications) to run its experiments. |
| Software Dependencies | No | The paper mentions software components like "ResNet-18" and "Adam" but does not specify version numbers for programming languages or libraries. |
| Experiment Setup | Yes | For our proposed method, we fine-tune the parameters of ResNet-18 using the ModelNet40 dataset and use Adam with learning rate = 5×10⁻⁶, β1 = 0.9, β2 = 0.999, weight decay = 0.001, batch size = 8, and epoch = 30 for optimization. Furthermore, there are two parameters s and γ in the proposed method, where s denotes the number of nonzero elements in α and γ is the power exponent of each element of α. First, we tune s in the range [6, 12] with step 1 to select a few discriminative and informative views to make a joint decision during classification. Second, we vary γ from 1.5 to 10 with a step of 1 to explore the influence of different values of γ on classification accuracy. |
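The reported setup (fine-tuning ResNet-18 with Adam at the hyperparameters quoted above) can be approximated with the short sketch below. This is not the authors' code: the ImageNet-pretrained weights, the torchvision ≥ 0.13 `weights` enum, and the `train_loader` DataLoader of rendered 2D views are assumptions; the paper's view-selection mechanism (the α weights governed by s and γ) is omitted.

```python
# Minimal sketch (assumed, not the authors' implementation): fine-tuning a
# ResNet-18 backbone for 40-way ModelNet40 classification with the Adam
# settings quoted in the Experiment Setup row.
import torch
import torch.nn as nn
from torchvision import models

# Assumption: start from ImageNet-pretrained weights (torchvision >= 0.13 API).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 40)  # 40 ModelNet40 categories

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=5e-6,                # learning rate = 5×10⁻⁶ (as reported)
    betas=(0.9, 0.999),     # β1, β2 (as reported)
    weight_decay=0.001,     # weight decay (as reported)
)
criterion = nn.CrossEntropyLoss()

def train(train_loader, epochs=30, device="cuda"):
    """Training-loop skeleton: batch size 8 and 30 epochs as reported.
    `train_loader` is a hypothetical DataLoader yielding (views, labels)."""
    model.to(device).train()
    for _ in range(epochs):
        for views, labels in train_loader:
            views, labels = views.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(views), labels)
            loss.backward()
            optimizer.step()
```

A grid search over the two method-specific parameters would then sweep s over the integers 6 through 12 and γ over 1.5 to 10 in steps of 1, rerunning training for each pair.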