Benchmarking and Analyzing Point Cloud Classification under Corruptions
Authors: Jiawei Ren, Liang Pan, Ziwei Liu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we aim to rigorously benchmark and analyze point cloud classification under corruptions. To conduct a systematic investigation, we first provide a taxonomy of common 3D corruptions and identify the atomic corruptions. Then, we perform a comprehensive evaluation on a wide range of representative point cloud models to understand their robustness and generalizability. Our benchmark results show that although point cloud classification performance improves over time, the state-of-the-art methods are on the verge of being less robust. |
| Researcher Affiliation | Academia | Jiawei Ren, Liang Pan, Ziwei Liu (S-Lab, Nanyang Technological University). Correspondence to: Ziwei Liu <ziwei.liu@ntu.edu.sg>. |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Code is available at https://github.com/jiawei-ren/modelnetc. |
| Open Datasets | Yes | ModelNet40 is one of the most commonly used benchmarks in point cloud classification; it collects 12,311 CAD models in 40 categories (9,843 for training and 2,468 for testing). ... Based on ModelNet40 and the settings of (Qi et al., 2017b), we further corrupt the ModelNet40 test set with the aforementioned seven atomic corruptions to establish a comprehensive test suite, ModelNet-C. To achieve fair comparisons while following the OOD evaluation principle, we use the same training set as ModelNet40. (A hedged sketch of two atomic corruptions follows the table.) |
| Dataset Splits | No | The paper gives ModelNet40's split as 9,843 training and 2,468 test samples. It states that ModelNet-C is built upon the 'validation set of ModelNet40', but no distinct, quantified validation split separate from ModelNet40's own train/test split is described. A clear validation split is therefore not provided for reproduction. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as programming languages, libraries, or frameworks (e.g., Python version, PyTorch version, CUDA version). |
| Experiment Setup | Yes | Two conventional augmentations are used during training: 1) random anisotropic scaling in the range [2/3, 3/2]; 2) random translation in the range [-0.2, +0.2]. ... For RPC: we train the model for 250 epochs with a batch size of 32. We use SGD with momentum 0.9 for optimization and a cosine annealing scheduler to gradually decay the learning rate from 1e-2 to 1e-4. ... For WOLFMix: we set the number of anchors to 4, the sampling method to farthest point sampling, the kernel bandwidth to 0.5, the maximum local rotation range to 10 degrees, the maximum local scaling to 3, and the maximum local translation to 0.25. AugTune, proposed along with PointWOLF, is not used in training. For the mixing step, we use the default hyper-parameters in RSMix (Lee et al., 2021): RSMix probability 0.5, β set to 1.0, and a maximum of 512 point modifications. For training, the number of neighbors in k-NN is reduced to 20 and the number of epochs is increased to 500 for all methods. (A hedged training-configuration sketch also follows the table.) |
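
The "Open Datasets" row references seven atomic corruptions used to build ModelNet-C. As a minimal sketch of what such atomic corruptions look like, here are illustrative implementations of two of them (Jitter and Drop Global); the function names, severity parameters, and sampling choices are assumptions made for illustration, not the benchmark's released code.

```python
import numpy as np

def jitter(points: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """'Jitter' corruption: add per-point Gaussian noise.

    sigma is an illustrative severity knob, not the benchmark's calibrated value.
    """
    return points + np.random.normal(0.0, sigma, size=points.shape)

def drop_global(points: np.ndarray, drop_ratio: float = 0.25) -> np.ndarray:
    """'Drop Global' corruption: randomly discard a fraction of all points."""
    n = points.shape[0]
    keep = np.random.choice(n, size=int(n * (1.0 - drop_ratio)), replace=False)
    return points[keep]

# Usage: corrupt a random 1,024-point cloud projected onto the unit sphere.
cloud = np.random.randn(1024, 3).astype(np.float32)
cloud /= np.linalg.norm(cloud, axis=1, keepdims=True)
print(jitter(cloud).shape)       # (1024, 3)
print(drop_global(cloud).shape)  # (768, 3)
```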
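
The "Experiment Setup" row lists concrete augmentation and optimization hyper-parameters. Below is a minimal PyTorch sketch wiring those reported values together. The per-axis application of scaling and translation, the `Linear` stand-in for the RPC model, and all names in `WOLFMIX_CFG` are assumptions for illustration, not the authors' implementation or API.

```python
import numpy as np
import torch

def augment(points: np.ndarray) -> np.ndarray:
    """Conventional augmentations reported in the paper: random anisotropic
    scaling in [2/3, 3/2] and random translation in [-0.2, +0.2].
    Per-axis application is an assumption; the paper does not spell it out."""
    scale = np.random.uniform(2.0 / 3.0, 3.0 / 2.0, size=(1, 3))
    shift = np.random.uniform(-0.2, 0.2, size=(1, 3))
    return points * scale + shift

# Hypothetical stand-in for the RPC classifier (40 ModelNet40 classes);
# the real architecture is not reproduced here.
model = torch.nn.Linear(3, 40)

# Reported optimization recipe: SGD with momentum 0.9 and cosine annealing
# from 1e-2 down to 1e-4 over the 250-epoch schedule (batch size 32).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=250, eta_min=1e-4)

# WOLFMix (PointWOLF + RSMix) settings reported in the paper, gathered as a
# plain dict for reference; the key names are illustrative.
WOLFMIX_CFG = {
    "num_anchors": 4,
    "sampling": "farthest_point",
    "kernel_bandwidth": 0.5,
    "max_local_rotation_deg": 10,
    "max_local_scaling": 3,
    "max_local_translation": 0.25,
    "rsmix_prob": 0.5,
    "rsmix_beta": 1.0,
    "rsmix_max_point_modifications": 512,
}
```

In a training loop, `scheduler.step()` would be called once per epoch so the learning rate completes one cosine decay over the 250 epochs.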