Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline
Authors: Ankit Goyal, Hei Law, Bowei Liu, Alejandro Newell, Jia Deng
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the key ingredients of this progress and uncover two critical results. First, we find that auxiliary factors like different evaluation schemes, data augmentation strategies, and loss functions, which are independent of the model architecture, make a large difference in performance. ... Second, a very simple projection-based method, which we refer to as SimpleView, performs surprisingly well. It achieves on par or better results than sophisticated state-of-the-art methods on ModelNet40 while being half the size of PointNet++. It also outperforms state-of-the-art methods on ScanObjectNN, a real-world point cloud benchmark, and demonstrates better cross-dataset generalization. Table 2. Performance of various architectures on ModelNet40. Protocol affects performance by a large amount. SimpleView performs on par or better than prior architectures across protocols. Table 4. Performance of various architectures on ModelNet40 when using the best data-augmentation and loss function; and not using any feedback from the test set. (See the projection sketch after this table.) |
| Researcher Affiliation | Academia | Department of Computer Science, Princeton University, NJ, USA. Correspondence to: Ankit Goyal <agoyal@princeton.edu>. |
| Pseudocode | No | The paper describes the SimpleView architecture and experimental protocols in text and with diagrams, but no formal pseudocode or algorithm blocks are provided. |
| Open Source Code | Yes | Code is available at https://github.com/princeton-vl/SimpleView. |
| Open Datasets | Yes | The most widely adopted benchmark for comparing methods for point cloud classification has been ModelNet40 (Wu et al., 2015b). ... ScanObjectNN (Uy et al., 2019), a real-world point cloud benchmark. |
| Dataset Splits | Yes | Since the number of epochs is a hyper-parameter that depends on factors like data, model, optimizer, and loss, in our experiments, we create a validation set from the training set to tune the number of epochs. We then retrain the model with the complete training set to the tuned number of epochs. ScanObjectNN's official repository trains and evaluates the state-of-the-art models under the same protocol. ... The hyperparameter for cropping and scaling is found on a validation set made from ScanObjectNN's train set. ModelNet40... There are 9840 objects in the training set and 2468 in the test set. (See the split-protocol sketch after this table.) |
| Hardware Specification | Yes | Inference speed is measured on an NVIDIA 2080Ti averaged across 100 runs. (See the timing sketch after this table.) |
| Software Dependencies | Yes | We use PyTorch (Paszke et al., 2019) to implement all models and protocols while reusing the official code wherever possible. ... PointNet and PointNet++ are officially released in TensorFlow (Abadi et al., 2015). ... We use Adam (Kingma & Ba, 2014) with an initial learning rate of 1e-3 and a decay-on-plateau learning rate scheduler. |
| Experiment Setup | Yes | We use Adam (Kingma & Ba, 2014) with an initial learning rate of 1e-3 and a decay-on-plateau learning rate scheduler. The batch size and weight decay for each model are kept the same as the official version in Table 2. We use a batch size of 18 and no weight decay for SimpleView. ... We train each model for 1000 epochs. ... We use a batch size of 20 and no weight decay to train SimpleView for 300 epochs with an initial learning rate of 0.001, and use the final model for testing. (See the training-configuration sketch after this table.) |
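
For concreteness, the sketches below illustrate a few of the quoted details. First, the projection idea behind SimpleView: the point cloud is rendered as depth images from a few fixed views, which a CNN then classifies. This is a hedged sketch only; the single +z view, the 128-pixel resolution, and the nearest-point aggregation are illustrative assumptions, not the paper's exact rendering pipeline.

```python
# Hedged sketch of point-cloud-to-depth-image projection (one view along +z).
# SimpleView feeds several such views to a CNN; resolution and aggregation
# details here are assumptions, not the official implementation.
import torch

def depth_project(points: torch.Tensor, res: int = 128) -> torch.Tensor:
    """points: (N, 3) tensor with coordinates roughly in [-1, 1]."""
    # Map x, y coordinates to pixel indices.
    u = ((points[:, 0] + 1) / 2 * (res - 1)).long().clamp(0, res - 1)
    v = ((points[:, 1] + 1) / 2 * (res - 1)).long().clamp(0, res - 1)
    img = torch.full((res, res), float("inf"))
    # Keep the nearest point per pixel (a loop for clarity; real code
    # would vectorize this scatter-min).
    for ui, vi, d in zip(u.tolist(), v.tolist(), points[:, 2].tolist()):
        if d < img[vi, ui]:
            img[vi, ui] = d
    img[torch.isinf(img)] = 0.0  # empty pixels get background depth 0 (simplification)
    return img

view = depth_project(torch.rand(1024, 3) * 2 - 1)  # one of several views
```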
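
Next, the split-and-retrain protocol quoted under Dataset Splits. The toy tensors, the 90/10 split ratio, and the elided training loop are assumptions; the paper does not state its validation-set size.

```python
# Minimal sketch of the epoch-tuning protocol: carve a validation set from
# the training set, tune the number of epochs on it, then retrain on the
# full training set. The data and split ratio below are assumptions.
import torch
from torch.utils.data import TensorDataset, random_split

# Toy stand-in for the ModelNet40 training set (9840 objects in the real split).
train_set = TensorDataset(torch.randn(9840, 1024, 3),
                          torch.randint(0, 40, (9840,)))

# Step 1: hold out part of the training set for validation.
n_val = len(train_set) // 10  # 90/10 split is an assumption
sub_train, val_set = random_split(train_set, [len(train_set) - n_val, n_val])

# Step 2: train on `sub_train` for up to 1000 epochs, tracking validation
# accuracy to pick the best epoch count (training loop elided).
best_epoch = 0  # would be set to the epoch with highest validation accuracy

# Step 3: retrain from scratch on the complete training set for `best_epoch`
# epochs and evaluate the final model on the test set (no test-set feedback).
```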
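
The "averaged across 100 runs" timing under Hardware Specification could be measured as below, assuming PyTorch on a CUDA device. The model and batch shape are placeholders, not the benchmarked architectures.

```python
# Sketch of inference-speed measurement averaged over 100 runs on a GPU
# (the paper used an NVIDIA 2080Ti). Model and input are toy placeholders.
import time
import torch
import torch.nn as nn

device = torch.device("cuda")
model = nn.Linear(1024 * 3, 40).to(device).eval()  # stand-in classifier
x = torch.randn(16, 1024 * 3, device=device)       # assumed input batch

with torch.no_grad():
    model(x)                   # warm-up pass before timing
    torch.cuda.synchronize()   # flush queued GPU work
    start = time.time()
    for _ in range(100):       # average across 100 runs, as in the paper
        model(x)
    torch.cuda.synchronize()
    avg_ms = (time.time() - start) / 100 * 1000
print(f"average inference time: {avg_ms:.2f} ms")
```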
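
Finally, the optimizer and scheduler quoted under Experiment Setup map onto standard PyTorch components. The scheduler's factor and patience values are assumptions; the paper only specifies "decay-on-plateau".

```python
# Sketch of the reported training configuration: Adam with initial learning
# rate 1e-3, no weight decay for SimpleView, and a decay-on-plateau scheduler.
# The scheduler's factor/patience values are assumptions.
import torch
import torch.nn as nn

model = nn.Linear(1024 * 3, 40)  # toy stand-in for SimpleView (40 classes)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=10)

for epoch in range(1000):    # "We train each model for 1000 epochs."
    # ... one training pass over the data would go here ...
    val_acc = 0.0            # placeholder for measured validation accuracy
    scheduler.step(val_acc)  # decay the learning rate when accuracy plateaus
```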