Building Neural Networks on Matrix Manifolds: A Gyrovector Space Approach

Authors: Xuan Son Nguyen, Shuo Yang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we report results of our experiments for two applications, i.e., human action recognition and knowledge graph completion. Details on the datasets and our experimental settings are given in Appendix A.
Researcher Affiliation | Academia | ETIS, UMR 8051, CY Cergy Paris Université, ENSEA, CNRS, Cergy, France.
Pseudocode | No | The paper presents mathematical concepts and derivations but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/spratmnt/har [...] Code available at https://github.com/spratmnt/kgc
Open Datasets | Yes | HDM05 (Müller et al., 2007) [...] FPHA (Garcia-Hernando et al., 2018) [...] NTU60 (Shahroudy et al., 2016) [...] WN18RR (Miller, 1995) [...] FB15k-237 (Toutanova et al., 2015)
Dataset Splits | No | For the HDM05, FPHA, and NTU60 datasets, the paper describes training and testing splits (e.g., '2 subjects are used for training and the remaining 3 subjects are used for testing' for HDM05) but does not explicitly describe a validation split. For knowledge graph completion, it states that 'Early stopping is used when the MRR score of the model on the validation set does not improve after 500 epochs', which indicates that a validation set is used, but its size or percentage is not given.
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU/CPU models, processor types, or memory amounts, for its experiments.
Software Dependencies | No | The paper states that networks are 'implemented with Pytorch framework' but does not give version numbers for PyTorch or any other software dependency.
Experiment Setup | Yes | Our networks are implemented with Pytorch framework. They are trained using cross-entropy loss and Adadelta optimizer for 2000 epochs. The learning rate is set to 10⁻³. We use a batch size of 32 for HDM05 and FPHA datasets, and a batch size of 256 for NTU60 dataset. [...] They are trained using binary cross-entropy loss and SGD optimizer for 2000 epochs. The learning rate is set to 10⁻³ with weight decay of 10⁻⁵. The batch size is set to 4096. The number of negative samples is set to 10.
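The reported hyperparameters can be gathered into a small configuration sketch. This is purely illustrative: the dict names (`HAR_CONFIG`, `KGC_CONFIG`) and the `batch_size_for` helper are assumptions for this sketch and do not come from the authors' released code.

```python
# Hedged sketch: the hyperparameters reported in the paper, collected into
# plain Python config dicts. All identifiers here are illustrative, not
# taken from the authors' repositories.

# Human action recognition (HDM05, FPHA, NTU60): cross-entropy + Adadelta.
HAR_CONFIG = {
    "loss": "cross-entropy",
    "optimizer": "Adadelta",
    "epochs": 2000,
    "learning_rate": 1e-3,
    "batch_size": {"HDM05": 32, "FPHA": 32, "NTU60": 256},
}

# Knowledge graph completion (WN18RR, FB15k-237): BCE + SGD.
KGC_CONFIG = {
    "loss": "binary cross-entropy",
    "optimizer": "SGD",
    "epochs": 2000,
    "learning_rate": 1e-3,
    "weight_decay": 1e-5,
    "batch_size": 4096,
    "negative_samples": 10,
}

def batch_size_for(dataset: str) -> int:
    """Return the batch size reported for a given dataset name."""
    if dataset in HAR_CONFIG["batch_size"]:
        return HAR_CONFIG["batch_size"][dataset]
    if dataset in ("WN18RR", "FB15k-237"):
        return KGC_CONFIG["batch_size"]
    raise ValueError(f"Unknown dataset: {dataset}")

print(batch_size_for("NTU60"))   # 256
print(batch_size_for("WN18RR"))  # 4096
```

Collecting the settings this way makes the two training regimes (per-dataset batch sizes for action recognition versus a single large batch for knowledge graph completion) easy to compare at a glance.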