Rotation-Invariant Local-to-Global Representation Learning for 3D Point Cloud

Authors: Seohyun Kim, Jaeyoo Park, Bohyung Han

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section presents the experimental results of our algorithm compared to existing approaches. We demonstrate the effectiveness of our framework via several ablation studies.
Researcher Affiliation | Academia | Computer Vision Laboratory & ASRI, Seoul National University {goodbye61, bellos1203, bhhan}@snu.ac.kr
Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | The source codes are available on our project page (https://cvlab.snu.ac.kr/research/rotation_invariant_l2g/).
Open Datasets | Yes | To evaluate the robustness to rotation, we compare the proposed algorithm, RI-GCN, with recent 3D object classification approaches on ModelNet40 [26], a widely used benchmark. (A minimal rotation sketch follows the table.)
Dataset Splits | Yes | It consists of CAD models in 40 categories and contains 9,843 and 2,468 shapes for training and testing, respectively.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models) used for the experiments were mentioned in the paper.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as libraries or frameworks.
Experiment Setup | No | The paper does not explicitly provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed optimizer settings. It mentions following the evaluation protocols of previous works but does not list its own setup details.
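
The table notes that RI-GCN is evaluated for robustness to rotation on ModelNet40. As a minimal sketch of what such an evaluation typically involves, the code below applies a uniformly sampled 3D rotation to a point cloud before it would be fed to a classifier. The function names and the QR-based sampling scheme are illustrative assumptions, not the authors' released code or exact protocol.

```python
import numpy as np

def random_rotation_matrix(rng: np.random.Generator) -> np.ndarray:
    """Sample a random 3x3 rotation matrix (orthogonal, det = +1) via QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    # Fix column signs so the factorization is unique and q is Haar-distributed.
    q *= np.sign(np.diag(r))
    # Flip one column if needed so that det(q) = +1 (a proper rotation, not a reflection).
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    return q

def rotate_point_cloud(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one random rotation to an (N, 3) point cloud."""
    return points @ random_rotation_matrix(rng).T

# Example: rotate a toy cloud of 1,024 points (a common sampling size for ModelNet40 shapes).
rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))
rotated = rotate_point_cloud(cloud, rng)
assert rotated.shape == cloud.shape
```

Drawing the rotation from a QR decomposition of a Gaussian matrix, with the sign and determinant corrections above, is a standard way to obtain (approximately) uniform random rotations; the paper's exact train/test augmentation settings are not reproduced here.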