Riemannian batch normalization for SPD neural networks
Authors: Daniel Brooks, Olivier Schwander, Frederic Barbaresco, Jean-Yves Schneider, Matthieu Cord
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our proposed approach with experiments in three different contexts on diverse data types: a drone recognition dataset from radar observations, and on emotion and action recognition datasets from video and motion capture data. Experiments show that the Riemannian batchnorm systematically gives better classification performance compared with leading methods and a remarkable robustness to lack of data. |
| Researcher Affiliation | Collaboration | Daniel Brooks (Thales Land and Air Systems, BU ARC, Limours, France; Sorbonne Université, CNRS, LIP6, Paris, France); Olivier Schwander (Sorbonne Université, CNRS, LIP6, Paris, France); Frédéric Barbaresco (Thales Land and Air Systems, BU ARC, Limours, France); Jean-Yves Schneider (Thales Land and Air Systems, BU ARC, Limours, France); Matthieu Cord (Sorbonne Université, CNRS, LIP6, Paris, France) |
| Pseudocode | Yes | Algorithm 1 Riemannian batch normalization on S+ , training and testing phase |
| Open Source Code | Yes | (experiments are made reproducible with our open-source PyTorch library, released along with the article). |
| Open Datasets | Yes | We provide the data in a pre-processed form alongside the PyTorch [40] code for reproducibility purposes. ... To spur reproducibility, we also experiment on synthetic, publicly available data. |
| Dataset Splits | Yes | We test the two SPD-based models ... over a 5-fold cross-validation, split in a train-test of 75%/25%. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions "PyTorch [40]" but does not specify its version number or any other software dependencies with their versions. |
| Experiment Setup | Yes | All networks are trained for 200 epochs using SGD with momentum set to 0.9, with a batch size of 30 and learning rate 5e-3, 1e-2, or 5e-2. |
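The Algorithm 1 referenced in the table (Riemannian batch normalization on the SPD manifold) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released PyTorch code: the function names, the fixed Fréchet-mean iteration count, and the use of SciPy's matrix functions are assumptions. The idea is to compute the batch's Fréchet (Karcher) mean and use congruence transforms, which are isometries of the affine-invariant metric, to center the batch at the identity and then shift it to a learned bias point.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm


def spd_frechet_mean(mats, iters=15):
    """Approximate Karcher mean of SPD matrices (affine-invariant metric).

    Illustrative fixed-point iteration; the iteration count is an assumption.
    """
    G = np.mean(mats, axis=0)  # arithmetic mean is SPD: a valid starting point
    for _ in range(iters):
        G_half = np.real(sqrtm(G))
        G_ihalf = np.linalg.inv(G_half)
        # average the batch in the tangent space at G, then map back
        T = np.mean([np.real(logm(G_ihalf @ M @ G_ihalf)) for M in mats], axis=0)
        G = G_half @ np.real(expm(T)) @ G_half
    return G


def spd_batchnorm(mats, bias):
    """Center the batch's Riemannian mean at the identity, then shift to `bias`."""
    G = spd_frechet_mean(mats)
    G_ihalf = np.linalg.inv(np.real(sqrtm(G)))
    B_half = np.real(sqrtm(bias))
    # Congruence transforms preserve the SPD geometry, so the batch mean
    # moves from G to the identity, then from the identity to `bias`.
    return [B_half @ (G_ihalf @ M @ G_ihalf) @ B_half for M in mats]
```

With `bias` set to the identity, the Fréchet mean of the normalized batch lands (numerically) on the identity matrix; in the paper's network, `bias` is a learned SPD parameter updated during training.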