Steerable 3D Spherical Neurons

Authors: Pavlo Melnyk, Michael Felsberg, Mårten Wadenbäck

ICML 2022

Reproducibility variables, results, and supporting quotes from the LLM response:
Research Type: Experimental. "Finally, we use a synthetic point set and real-world 3D skeleton data to verify our theoretical findings. The code is available at https://github.com/pavlo-melnyk/steerable-3d-neurons. In this section, we describe the experiments we conducted to confirm our findings presented in Section 4."
Researcher Affiliation: Academia. "Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden. Correspondence to: Pavlo Melnyk <pavlo.melnyk@liu.se>, Michael Felsberg <michael.felsberg@liu.se>."
Pseudocode: No. "The paper describes procedural steps in paragraph form but does not contain any clearly labeled pseudocode or algorithm blocks."
Open Source Code: Yes. "The code is available at https://github.com/pavlo-melnyk/steerable-3d-neurons."
Open Datasets: Yes. "3D Tetris: Following the experiments reported by Thomas et al. (2018), Weiler et al. (2018a), and Melnyk et al. (2021), we use the following synthetic point set of eight 3D Tetris shapes (Thomas et al., 2018)... 3D skeleton data: We also perform experiments on real-world data to substantiate the validity of our theoretical results. We use the UTKinect-Action3D dataset introduced by Xia et al. (2012)..."
Dataset Splits: Yes. "From each action sequence, we randomly select 50% of the skeletons for the test set and 20% of the remainder as validation data. The resulting data split is as follows: 2295 training shapes, 670 shapes for validation, and 3062 test shapes, approximately corresponding to 38%, 11%, and 51% of the total amount of skeletons, respectively."
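The per-sequence split described in the quote above can be sketched in Python. This is a minimal sketch under assumptions: the function name, the list-of-lists input format, and the rounding behavior are illustrative, not taken from the paper's code.

```python
import random

def split_skeletons(sequences, test_frac=0.5, val_frac=0.2, seed=0):
    """Per-sequence split as quoted: 50% of each action sequence's
    skeletons go to the test set, then 20% of the remainder becomes
    validation data and the rest training data.

    `sequences` is a list of action sequences, each a list of skeleton
    samples (hypothetical format for illustration)."""
    rng = random.Random(seed)
    train, val, test = [], [], []
    for seq in sequences:
        idx = list(range(len(seq)))
        rng.shuffle(idx)
        n_test = round(len(seq) * test_frac)      # 50% of the sequence
        rest = idx[n_test:]
        n_val = round(len(rest) * val_frac)       # 20% of the remainder
        test.extend(seq[i] for i in idx[:n_test])
        val.extend(seq[i] for i in rest[:n_val])
        train.extend(seq[i] for i in rest[n_val:])
    return train, val, test
```

With these fractions the expected proportions are roughly 40% / 10% / 50% train/val/test, consistent with the approximate 38% / 11% / 51% figures quoted from the paper (the deviation comes from per-sequence rounding).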
Hardware Specification: No. "The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory, or specific computing environments) used to run the experiments."
Software Dependencies: No. "The paper mentions 'PyTorch (Paszke et al., 2019)' but does not specify a version number for this or any other software dependency."
Experiment Setup: Yes. "We first train a two-layer (ancestor) multilayer geometric perceptron (MLGP) model (Melnyk et al., 2021), where the first layer consists of geometric neurons and the output layer of hypersphere neurons... we use only one configuration with five hidden units for the Tetris data and twelve hidden units... We train both models by minimizing the cross-entropy loss function and use the Adam optimizer (Kingma & Ba, 2015) with the default hyperparameters (the learning rate is 0.001). The Tetris MLGP learns to classify the eight shapes in the canonical orientation perfectly after 2000 epochs, whereas the Skeleton model trained for 10000 epochs..."
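The training setup quoted above (cross-entropy loss, Adam with default hyperparameters, lr = 0.001) can be sketched in PyTorch. Note the hedge: geometric and hypersphere neurons (MLGP layers) are not standard PyTorch modules, so a plain two-layer MLP stands in here purely to illustrate the optimizer and loss configuration; the hidden sizes (5 for Tetris, 12 for the skeleton data) follow the quote.

```python
import torch
import torch.nn as nn

def make_model(in_dim, hidden, n_classes):
    # Stand-in for the two-layer MLGP: a plain MLP with the quoted
    # hidden sizes (5 for Tetris, 12 for skeletons). The actual MLGP
    # layers are defined in the paper's repository, not here.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, n_classes))

def train(model, x, y, epochs=2000, lr=1e-3):
    # Adam with default hyperparameters (lr=0.001) and cross-entropy
    # loss, matching the setup quoted from the paper.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()
```

For the Tetris task the quote states 2000 epochs of full-batch training over the eight shapes; the skeleton model is trained for 10000 epochs.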