Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals
Authors: Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the runtime to the Wasserstein distance with the Affine-Invariant (AIW) and Log-Euclidean (LEW) metrics, and to the Sinkhorn algorithm (LES), a classical alternative to Wasserstein for reducing the computational cost. When enough samples are available, computing the Wasserstein distance takes more time than computing the cost matrix, and SPDSW is fast to compute. Numerical results: We demonstrate the ability of our algorithm to perform well on brain-age prediction on the largest publicly available MEG dataset, Cam-CAN (Taylor et al., 2017), which contains recordings from 646 subjects at rest. (A hedged sketch of the LEW/LES baselines follows the table.) |
| Researcher Affiliation | Collaboration | ¹Université Bretagne Sud, LMBA; ²Université Paris-Saclay, Inria, CEA; ³Criteo AI Lab; ⁴Université de Rouen, LITIS; ⁵IMT Atlantique, Lab-STICC; ⁶Université Paris-Saclay, CNRS, LISN; ⁷Université Bretagne Sud, IRISA. |
| Pseudocode | Yes | Algorithm 1: Computation of SPDSW. Algorithm 2: Computation of HSPDSW. (A hedged re-implementation sketch of Algorithm 1 follows the table.) |
| Open Source Code | Yes | Code is available at https://github.com/clbonet/SPDSW. |
| Open Datasets | Yes | Numerical results We demonstrate the ability of our algorithm to perform well on brain-age prediction on the largest publicly available MEG data-set Cam-CAN (Taylor et al., 2017), which contains recordings from 646 subjects at rest. we focus on cross-session classification for the BCI IV 2.a Competition dataset (Brunner et al., 2008) |
| Dataset Splits | Yes | Figure 3 shows that SPDSW and log SW (1000 projections, time-frames of 2 s) perform best on average under 10-fold cross-validation with 10 random seeds, compared to the baseline with Ridge regression (Sabbagh et al., 2019) and to Kernel Ridge regression based on the Log-Euclidean metric, with identical pre-processing. (A sketch of this protocol follows the table.) |
| Hardware Specification | Yes | The computations have been performed on an NVIDIA Tesla V100-DGXS 32GB GPU using PyTorch (Paszke et al., 2017). We compare accuracies and runtimes for several methods run on a GPU Tesla V100-DGXS 32GB. |
| Software Dependencies | No | The paper mentions software like PyTorch, scikit-learn, POT, and Geoopt, but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | Figure 3 shows that SPDSW and log SW (1000 projections, time-frames of 2 s) perform best on average under 10-fold cross-validation with 10 random seeds. For the sliced discrepancies, we always use L = 500 projections, which we draw only once at the beginning. When optimizing over particles, we used a learning rate of 1000 for the sliced methods and of 10 for Wasserstein and Sinkhorn. The number of epochs was fixed at 500 for the cross-session and cross-subject tasks. For the basic transformations, we always use 500 epochs and choose a learning rate of 1e-1 on cross-session and 5e-1 on cross-subject for sliced methods, and of 1e-2 for Wasserstein and Sinkhorn. For the Sinkhorn algorithm, we use ε = 10 with the default hyperparameters from the POT implementation. Moreover, we only use one translation and one rotation for the transformation. (A sketch of the particle-optimization loop follows the table.) |
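
To make the Pseudocode row concrete, here is a minimal PyTorch sketch of a Monte Carlo estimate of SPDSW in the spirit of the paper's Algorithm 1: SPD matrices are mapped to symmetric matrices via the matrix logarithm, projected onto random symmetric directions of unit Frobenius norm, and compared with sorted one-dimensional Wasserstein distances. This is not the authors' released implementation (see the GitHub link above); the names `matrix_log`, `sample_symmetric_directions`, and `spdsw`, and the equal-sample-size restriction, are our own simplifications.

```python
import torch

def matrix_log(S):
    """Matrix logarithm of a batch of SPD matrices via eigendecomposition."""
    eigvals, eigvecs = torch.linalg.eigh(S)
    return eigvecs @ torch.diag_embed(torch.log(eigvals)) @ eigvecs.transpose(-1, -2)

def sample_symmetric_directions(L, d, device="cpu"):
    """L symmetric matrices of unit Frobenius norm (uniform on the sphere of
    symmetric matrices: symmetrized Gaussian coordinates are isotropic)."""
    G = torch.randn(L, d, d, device=device)
    A = 0.5 * (G + G.transpose(-1, -2))
    return A / torch.linalg.norm(A, dim=(-2, -1), keepdim=True)

def spdsw(X, Y, L=500, p=2):
    """Monte Carlo estimate of SPDSW_p^p between two equal-size samples
    X, Y of shape (n, d, d) containing SPD matrices."""
    A = sample_symmetric_directions(L, X.shape[-1], device=X.device)
    # Projections t^A(M) = <A, log M>_F, one scalar per (direction, sample).
    projX = torch.einsum('lij,nij->ln', A, matrix_log(X))
    projY = torch.einsum('lij,nij->ln', A, matrix_log(Y))
    # 1D Wasserstein between uniform empirical measures = compare sorted samples.
    projX = torch.sort(projX, dim=1).values
    projY = torch.sort(projY, dim=1).values
    return (projX - projY).abs().pow(p).mean()
```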
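
The runtime comparison quoted in the Research Type row pits SPDSW against exact Wasserstein and Sinkhorn under the Log-Euclidean ground metric d(A, B) = ||log A − log B||_F. Below is a hedged sketch of those two baselines with POT, which the paper cites; uniform weights, the random stand-in data, and the helper names `random_spd` and `log_euclidean_cost` are our assumptions, while reg = 10 mirrors the ε = 10 quoted in the Experiment Setup row.

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.linalg import logm

rng = np.random.default_rng(0)

def random_spd(n, d):
    """Random, well-conditioned SPD matrices standing in for covariance estimates."""
    G = rng.standard_normal((n, d, d))
    return G @ G.transpose(0, 2, 1) + d * np.eye(d)

def log_euclidean_cost(X, Y):
    """Pairwise squared Log-Euclidean distances ||log A - log B||_F^2."""
    logX = np.stack([logm(A).real for A in X])
    logY = np.stack([logm(B).real for B in Y])
    diff = logX[:, None] - logY[None, :]          # (n, m, d, d)
    return (diff ** 2).sum(axis=(-2, -1))         # (n, m) cost matrix

X, Y = random_spd(50, 5), random_spd(50, 5)
M = log_euclidean_cost(X, Y)
a = np.full(len(X), 1 / len(X))  # uniform weights on both samples
b = np.full(len(Y), 1 / len(Y))
lew = ot.emd2(a, b, M)                 # exact Wasserstein cost (LEW)
les = ot.sinkhorn2(a, b, M, reg=10.0)  # entropic Sinkhorn cost (LES)
```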
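
The Experiment Setup row describes optimizing over particles with a learning rate of 1000 for sliced methods, 500 epochs, and L = 500 projections drawn only once. A hedged sketch of that loop using Geoopt (listed among the paper's dependencies), reusing `matrix_log` and `sample_symmetric_directions` from the SPDSW sketch above; the Riemannian-SGD choice, the placeholder covariances, and the sizes are our assumptions, not the authors' exact code.

```python
import torch
import geoopt  # Riemannian optimization on the SPD manifold

torch.manual_seed(0)
d, n = 22, 64  # 22 channels as in BCI IV 2a; n is an arbitrary placeholder

def rand_spd(n, d):
    G = torch.randn(n, d, d)
    return G @ G.transpose(-1, -2) + d * torch.eye(d)

source_covs, target_covs = rand_spd(n, d), rand_spd(n, d)

manifold = geoopt.manifolds.SymmetricPositiveDefinite()
particles = geoopt.ManifoldParameter(source_covs.clone(), manifold=manifold)
optimizer = geoopt.optim.RiemannianSGD([particles], lr=1000.0)  # lr for sliced methods

A = sample_symmetric_directions(500, d)  # L = 500 projections, drawn only once
for epoch in range(500):                 # 500 epochs, as in the setup row
    optimizer.zero_grad()
    # SPDSW^2 with frozen directions: sorted 1D comparisons of the projections.
    projX = torch.sort(torch.einsum('lij,nij->ln', A, matrix_log(particles)), dim=1).values
    projY = torch.sort(torch.einsum('lij,nij->ln', A, matrix_log(target_covs)), dim=1).values
    loss = (projX - projY).pow(2).mean()
    loss.backward()
    optimizer.step()
```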
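
Finally, the Dataset Splits row quotes 10-fold cross-validation over 10 random seeds for brain-age prediction. A minimal scikit-learn sketch of how we read that protocol; the `fit_predict` callback, the per-seed shuffling, and the mean-absolute-error metric are assumptions rather than the authors' exact evaluation code.

```python
import numpy as np
from sklearn.model_selection import KFold

def repeated_cv_mae(features, ages, fit_predict, n_folds=10, n_seeds=10):
    """Mean absolute error over 10-fold CV repeated for 10 random seeds.

    `fit_predict(train_idx, test_idx)` is a hypothetical callback that fits a
    regressor (e.g. Kernel Ridge on SPDSW-based features) on the training fold
    and returns predicted ages for the test fold.
    """
    maes = []
    for seed in range(n_seeds):
        folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for train_idx, test_idx in folds.split(features):
            preds = fit_predict(train_idx, test_idx)
            maes.append(np.abs(preds - ages[test_idx]).mean())
    return float(np.mean(maes)), float(np.std(maes))
```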