Learning to Orient Surfaces by Self-supervised Spherical CNNs
Authors: Riccardo Spezialetti, Federico Stella, Marlon Marcon, Luciano Silva, Samuele Salti, Luigi Di Stefano
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several public datasets prove its effectiveness at orienting local surface patches as well as whole objects. |
| Researcher Affiliation | Academia | 1 Department of Computer Science and Engineering (DISI), University of Bologna, Italy 2 Federal University of Technology Paraná, Dois Vizinhos, Brazil 3 Federal University of Paraná, Curitiba, Brazil |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code for training and testing Compass is available at https://github.com/CVLAB-Unibo/compass. |
| Open Datasets | Yes | We train Compass on 3DMatch [44] following the standard procedure of the benchmark, with 48 scenes for training and 6 for validation. ... We train Compass on ModelNet40 [47] using 8,192 samples for training and 1,648 for validation. ... We also performed a qualitative evaluation of the transfer learning performance of Compass by orienting clouds from the ShapeNet [2] dataset. |
| Dataset Splits | Yes | We train Compass on 3DMatch following the standard procedure of the benchmark, with 48 scenes for training and 6 for validation. ... We train Compass on ModelNet40 using 8,192 samples for training and 1,648 for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers like 'Adam [18]' and frameworks like 'PointNet [31]', but does not provide specific version numbers for programming languages, libraries, or other software dependencies. |
| Experiment Setup | Yes | Network Architecture: The network architecture comprises 1 S2 layer followed by 3 SO(3) layers, with bandwidth B = 24; the respective numbers of output channels are set to 40, 20, 10, 1. The input spherical signal is computed with K = 4 channels. ... We use Adam [18] as the optimizer, with 0.001 as the learning rate when training on 3DMatch and for test-time adaptation on Stanford Views, and 0.0005 for adaptation on ETH. ... We also apply test-time adaptation on ETH and Stanford Views: the test set is used for a quick 2-epoch training with a 20% validation split, right before being used to assess the performance of the network. |
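
For readers trying to reproduce the configuration summarized in the Experiment Setup row, the sketch below instantiates a network with 1 S2 layer followed by 3 SO(3) layers, bandwidth B = 24, output channels 40, 20, 10, 1, and a K = 4 channel input signal, together with the reported Adam settings. It assumes the s2cnn package for the spherical and SO(3) convolutions; the grid choices, per-layer output bandwidths, and ReLU non-linearities are assumptions for illustration and are not taken from the authors' released code at https://github.com/CVLAB-Unibo/compass.

```python
# Minimal sketch of the reported Compass architecture, assuming the s2cnn
# package. Grids, per-layer output bandwidths, and non-linearities are
# assumptions; see the authors' repository for the reference implementation.
import torch
import torch.nn as nn
from s2cnn import S2Convolution, SO3Convolution
from s2cnn import s2_near_identity_grid, so3_near_identity_grid


class CompassSketch(nn.Module):
    def __init__(self, bandwidth=24, in_channels=4):
        super().__init__()
        grid_s2 = s2_near_identity_grid()
        grid_so3 = so3_near_identity_grid()
        # Channel progression 4 -> 40 -> 20 -> 10 -> 1, as reported in the paper.
        self.s2_conv = S2Convolution(nfeature_in=in_channels, nfeature_out=40,
                                     b_in=bandwidth, b_out=bandwidth, grid=grid_s2)
        self.so3_conv1 = SO3Convolution(nfeature_in=40, nfeature_out=20,
                                        b_in=bandwidth, b_out=bandwidth, grid=grid_so3)
        self.so3_conv2 = SO3Convolution(nfeature_in=20, nfeature_out=10,
                                        b_in=bandwidth, b_out=bandwidth, grid=grid_so3)
        self.so3_conv3 = SO3Convolution(nfeature_in=10, nfeature_out=1,
                                        b_in=bandwidth, b_out=bandwidth, grid=grid_so3)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, 4, 2B, 2B) spherical signal; the final layer yields a
        # single-channel volume over SO(3) of shape (batch, 1, 2B, 2B, 2B).
        # Note: depending on the s2cnn installation, the forward pass may
        # require a CUDA device.
        x = self.relu(self.s2_conv(x))
        x = self.relu(self.so3_conv1(x))
        x = self.relu(self.so3_conv2(x))
        return self.so3_conv3(x)


if __name__ == "__main__":
    model = CompassSketch(bandwidth=24, in_channels=4)
    # Adam with lr = 0.001 for 3DMatch training and Stanford Views adaptation,
    # lr = 0.0005 for adaptation on ETH, as reported in the paper.
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    print(sum(p.numel() for p in model.parameters()), "parameters")
```

Under this reading, the reported test-time adaptation would amount to re-running two epochs of the self-supervised training loop on each test set (with 20% held out for validation) right before evaluation, using the learning rates quoted above.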