Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Lie Group Approach to Riemannian Batch Normalization
Authors: Ziheng Chen, Yue Song, Yunmei Liu, Nicu Sebe
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on widely-used SPD benchmarks demonstrate the effectiveness of our framework. |
| Researcher Affiliation | Academia | 1 University of Trento, 2 University of Louisville |
| Pseudocode | Yes | Algorithm 1: Lie Group Batch Normalization (LieBN) Algorithm |
| Open Source Code | Yes | The code is available at https://github.com/GitZH-Chen/LieBN.git. |
| Open Datasets | Yes | Radar dataset (Brooks et al., 2019b), HDM05 dataset (Müller et al., 2007), FPHA (Garcia-Hernando et al., 2018), Hinss2021 dataset (Hinss et al., 2021) |
| Dataset Splits | Yes | Ten-fold experiments on the Radar, HDM05, and FPHA datasets are carried out with randomized initialization and splits (the split is officially fixed for the FPHA dataset), while on the Hinss2021 dataset, models are fit and evaluated with a randomized cross-validation scheme that leaves out 5% of the sessions (inter-session) or subjects (inter-subject). A protocol sketch follows the table. |
| Hardware Specification | Yes | All experiments use an Intel Core i9-7960X CPU with 32GB RAM and an NVIDIA GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper mentions software like PyTorch, MOABB, MNE, and geoopt, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | The experiments are conducted with a learning rate of 5e-3, a batch size of 30, and 200 training epochs on the Radar, HDM05, and FPHA datasets. For the Hinss2021 dataset, following Kobler et al. (2022a), we use a learning rate of 1e-3 with a weight decay of 1e-4, a batch size of 50, and 50 training epochs. A configuration sketch follows the table. |
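The Dataset Splits row describes two distinct evaluation protocols. Below is a minimal sketch of both, assuming generic array-shaped data; `official_fpha_split` and `train_and_eval` are hypothetical placeholders (not the authors' API), and the 0.2 test fraction for the randomized splits is an assumption rather than a figure reported above.

```python
# Sketch of the two evaluation protocols from the Dataset Splits row.
# `official_fpha_split` and `train_and_eval` are hypothetical placeholders.
from sklearn.model_selection import GroupShuffleSplit, train_test_split


def ten_run_protocol(X, y, fixed_split=False, n_runs=10):
    """Radar/HDM05/FPHA: ten runs, each with a fresh random initialization
    and (except for FPHA, whose split is officially fixed) a fresh split."""
    for run in range(n_runs):
        if fixed_split:  # FPHA: reuse the official split every run
            X_tr, X_te, y_tr, y_te = official_fpha_split(X, y)
        else:            # Radar/HDM05: re-randomize the split per run
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.2, random_state=run)  # fraction assumed
        train_and_eval(X_tr, y_tr, X_te, y_te, seed=run)


def hinss2021_protocol(X, y, groups, n_splits=10):
    """Hinss2021: randomized leave-5%-out cross-validation over sessions
    (inter-session) or subjects (inter-subject); `groups` holds the
    session or subject ID of each trial."""
    gss = GroupShuffleSplit(n_splits=n_splits, test_size=0.05)
    for tr, te in gss.split(X, y, groups=groups):
        train_and_eval(X[tr], y[tr], X[te], y[te])
```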
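The Experiment Setup row reduces to a small hyperparameter table; the sketch below wires those numbers into a PyTorch training configuration. The optimizer family (Adam) and the model/dataset objects are assumptions, since the row reports only learning rate, weight decay, batch size, and epoch count.

```python
# Hyperparameters from the Experiment Setup row, wired into a PyTorch
# training configuration. The optimizer choice is an assumption.
import torch

HPARAMS = {
    # Radar/HDM05/FPHA; weight decay is not reported, so 0.0 is assumed
    "radar/hdm05/fpha": dict(lr=5e-3, weight_decay=0.0, batch_size=30, epochs=200),
    # Hinss2021, following Kobler et al. (2022a)
    "hinss2021": dict(lr=1e-3, weight_decay=1e-4, batch_size=50, epochs=50),
}


def make_training(model, train_set, key):
    hp = HPARAMS[key]
    loader = torch.utils.data.DataLoader(
        train_set, batch_size=hp["batch_size"], shuffle=True)
    optimizer = torch.optim.Adam(  # assumed; not stated in the row above
        model.parameters(), lr=hp["lr"], weight_decay=hp["weight_decay"])
    return loader, optimizer, hp["epochs"]
```

Since geoopt is listed among the software dependencies, its `geoopt.optim.RiemannianAdam` would be a plausible drop-in for the optimizer when the model carries manifold-valued parameters, but the table does not confirm which optimizer was used.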