Rethinking the compositionality of point clouds through regularization in the hyperbolic space
Authors: Antonio Montanaro, Diego Valsesia, Enrico Magli
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the performance of our regularizer HyCoRe on the synthetic dataset ModelNet40 [27] (12,331 objects with 1024 points, 40 classes) and on the real dataset ScanObjectNN [28] (15,000 objects with 1024 points, 15 classes). |
| Researcher Affiliation | Academia | Antonio Montanaro Politecnico di Torino, Italy antonio.montanaro@polito.it Diego Valsesia Politecnico di Torino, Italy diego.valsesia@polito.it Enrico Magli Politecnico di Torino, Italy enrico.magli@polito.it |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. Methods are described in prose. |
| Open Source Code | Yes | Code of the project: https://github.com/diegovalsesia/HyCoRe |
| Open Datasets | Yes | We study the performance of our regularizer HyCoRe on the synthetic dataset ModelNet40 [27] (12,331 objects with 1024 points, 40 classes) and on the real dataset ScanObjectNN [28] (15,000 objects with 1024 points, 15 classes). We use standard datasets and splits well known by the literature and report all values of the hyperparameters for our tests. |
| Dataset Splits | Yes | We use standard datasets and splits well known by the literature and report all values of the hyperparameters for our tests. |
| Hardware Specification | Yes | Models are trained on an Nvidia A6000 GPU. |
| Software Dependencies | No | The paper does not list the software libraries or version numbers required to reproduce the experiments. |
| Experiment Setup | Yes | We use f = 256 features to be comparable to the official implementations in the Euclidean space, then we test the model over different embedding dimensions in the ablation study. Moreover, we set α = β = 0.01, γ = 1000 and δ = 4. For the number of points N of each part, we select a random number between 200 and 600, and for the whole object a random number between 800 and 1024 to ensure better flexibility of the learned model to part sizes. We train the models using Riemannian SGD optimization. A hedged configuration sketch follows the table. |
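
The sketch below collects the reported setup values into a runnable stub for orientation. The backbone (`encoder`), the `sample_sizes` helper, and the use of `geoopt` for Riemannian SGD are assumptions for illustration, not the official HyCoRe implementation; the exact roles of α, β, γ, δ are defined in the paper's regularizer.

```python
# Minimal sketch of the reported training configuration (assumptions noted inline).
import random
import torch
import geoopt  # assumed Riemannian optimization library; the official repo may differ

# Hyperparameters reported in the experiment setup
EMBED_DIM = 256                  # f: embedding dimension
ALPHA, BETA = 0.01, 0.01         # regularizer weights
GAMMA, DELTA = 1000, 4           # regularizer constants (see the paper for their exact roles)

def sample_sizes():
    """Random part/whole point counts, as described in the setup."""
    n_part = random.randint(200, 600)      # points per sampled part
    n_whole = random.randint(800, 1024)    # points per whole object
    return n_part, n_whole

# Placeholder for the point-cloud backbone that maps points to 256-d embeddings;
# the paper uses standard Euclidean backbones with a hyperbolic projection head.
encoder = torch.nn.Linear(3, EMBED_DIM)

# Riemannian SGD over the model parameters (geoopt API, assumed)
optimizer = geoopt.optim.RiemannianSGD(encoder.parameters(), lr=1e-3, momentum=0.9)
```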