CRIN: Rotation-Invariant Point Cloud Analysis and Rotation Estimation via Centrifugal Reference Frame
Authors: Yujing Lou, Zelin Ye, Yang You, Nianjuan Jiang, Jiangbo Lu, Weiming Wang, Lizhuang Ma, Cewu Lu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our method achieves rotation invariance, accurately estimates the object rotation, and obtains state-of-the-art results on rotation-augmented classification and part segmentation. ... In this section, we evaluate CRIN on several 3D object datasets and conduct the ablation study. |
| Researcher Affiliation | Collaboration | Yujing Lou1, Zelin Ye1, Yang You1, Nianjuan Jiang2, Jiangbo Lu2, Weiming Wang1, Lizhuang Ma1*, Cewu Lu1* 1 Shanghai Jiao Tong University 2 SmartMore |
| Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks with structured steps formatted like code. |
| Open Source Code | No | The paper does not provide any concrete access information, such as a repository link or an explicit statement about the release of source code for the methodology described. |
| Open Datasets | Yes | We evaluate CRIN on ModelNet40 dataset (Wu et al. 2015) for object classification. ... We use the ShapeNet part dataset (Yi et al. 2016) for 3D part segmentation. |
| Dataset Splits | Yes | We follow (Qi et al. 2017a) to split the dataset into 9843 and 2468 point clouds for training and testing, respectively. ... The train/test splitting is according to (Qi et al. 2017a). |
| Hardware Specification | Yes | The experiments are conducted on a single GeForce RTX 2080Ti GPU and an Intel(R) Core(TM) i9-7900X @ 3.30GHz CPU. |
| Software Dependencies | No | The paper mentions 'We use Adam (Kingma and Ba 2014) optimizer', but it does not specify any software components with their version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | We use Adam (Kingma and Ba 2014) optimizer during training and set the initial learning rate as 0.001. The batch size is 32, with about 2 minutes per training epoch on one GPU. |
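The paper specifies the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 0.001 and batch size 32, but names no software framework. As a minimal illustration of the reported optimizer settings, the sketch below implements a single Adam update step in pure Python with the paper's learning rate; the default betas and epsilon are the standard values from Kingma and Ba (2014), assumed here rather than stated in the paper, and this is not the authors' code.

```python
import math

def adam_step(params, grads, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step; lr=0.001 matches the paper's initial learning rate.

    params, grads: lists of scalar parameters and their gradients.
    m, v: first/second moment estimates, updated in place.
    t: 1-based step counter used for bias correction.
    """
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = beta1 * m[i] + (1 - beta1) * g          # first moment (mean of grads)
        v[i] = beta2 * v[i] + (1 - beta2) * g * g      # second moment (uncentered variance)
        m_hat = m[i] / (1 - beta1 ** t)                # bias-corrected moments
        v_hat = v[i] / (1 - beta2 ** t)
        new_params.append(p - lr * m_hat / (math.sqrt(v_hat) + eps))
    return new_params

# Toy example: one step on two scalar parameters.
params = [1.0, -2.0]
grads = [0.5, -0.3]
m = [0.0, 0.0]
v = [0.0, 0.0]
params = adam_step(params, grads, m, v, t=1)
# On the first step Adam moves each parameter by roughly lr * sign(grad).
```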