Convolutional Monge Mapping Normalization for learning on sleep data

Authors: Théo Gnassounou, Rémi Flamary, Alexandre Gramfort

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments on sleep EEG data show that CMMN leads to significant and consistent performance gains independent from the neural network architecture when adapting between subjects, sessions, and even datasets collected with different hardware.
Researcher Affiliation | Collaboration | Théo Gnassounou, Université Paris-Saclay, Inria, CEA, Palaiseau 91120, France (theo.gnassounou@inria.fr); Rémi Flamary, IP Paris, CMAP, UMR 7641, Palaiseau 91120, France (remi.flamary@polytechnique.edu); Alexandre Gramfort, Université Paris-Saclay, Inria, CEA, Palaiseau 91120, France (alexandre.gramfort@inria.fr). A. Gramfort joined Meta and can be reached at agramfort@meta.com.
Pseudocode | Yes | Algorithm 1: Train-Time CMMN; Algorithm 2: Test-Time CMMN. (An illustrative sketch of both follows the table.)
Open Source Code | Yes | In order to promote research reproducibility, code is available on github [footnote 2: https://github.com/PythonOT/convolutional-monge-mapping-normalization]
Open Datasets | Yes | We use three publicly available datasets: Physionet (a.k.a. Sleep EDF) [3], SHHS [4, 37] and MASS [2]. On all datasets, we want to perform sleep staging from 2-channel EEG signals. [...] In order to promote research reproducibility, code is available on github [footnote 2], and the datasets used are publicly available. (A data-loading sketch follows the table.)
Dataset Splits | Yes | The batch size is set to 128 and the early stopping is done on a validation set corresponding to 20% of the subjects in the training set with a patience of 10 epochs. (A subject-wise split sketch follows the table.)
Hardware Specification | Yes | The training is done on a Tesla V100-DGXS-32GB with PyTorch.
Software Dependencies | No | Numerical computation was enabled by the scientific Python ecosystem: NumPy [46], SciPy [47], Matplotlib [48], Seaborn [49], PyTorch [50], and MNE for EEG data processing [40]. While the software components are listed, specific version numbers are not provided for them.
Experiment Setup | Yes | We use the Adam optimizer with a learning rate of 10⁻³ for Chambon and 10⁻⁴ with a weight decay of 1 × 10⁻³ for DeepSleepNet. The batch size is set to 128 and the early stopping is done on a validation set corresponding to 20% of the subjects in the training set with a patience of 10 epochs. (A training-setup sketch follows the table.)
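
The Pseudocode row cites Algorithms 1 and 2 of the paper. As a rough illustration of the underlying idea, the following NumPy/SciPy sketch estimates a Welch PSD per domain, forms the barycenter PSD as the squared mean of the square-root PSDs, and maps each domain through a filter whose frequency response is sqrt(barycenter PSD / domain PSD). The filter length, function names, and the exact PSD estimator are assumptions made for this example, not the authors' implementation, which lives in the linked repository.

```python
import numpy as np
from scipy.signal import welch, fftconvolve


def domain_psd(X, filter_size=128):
    """Average two-sided Welch PSD over one domain's signals; X is (n_signals, n_times)."""
    _, psd = welch(X, nperseg=filter_size, return_onesided=False)
    return psd.mean(axis=0)


def barycenter_psd(psds):
    """Barycenter of the domain PSDs: square of the mean of the square-root PSDs."""
    return np.mean([np.sqrt(p) for p in psds], axis=0) ** 2


def mapping_filter(psd_k, psd_bar):
    """Time-domain filter whose frequency response is sqrt(psd_bar / psd_k)."""
    return np.fft.fftshift(np.real(np.fft.ifft(np.sqrt(psd_bar / psd_k))))


def cmmn_fit_transform(domains, filter_size=128):
    """Train-time sketch: map every source domain toward the PSD barycenter."""
    psds = [domain_psd(X, filter_size) for X in domains]
    psd_bar = barycenter_psd(psds)
    mapped = [fftconvolve(X, mapping_filter(p, psd_bar)[None, :], mode="same")
              for X, p in zip(domains, psds)]
    return mapped, psd_bar


def cmmn_transform_new_domain(X_new, psd_bar, filter_size=128):
    """Test-time sketch: adapt an unseen target domain using the stored barycenter."""
    h = mapping_filter(domain_psd(X_new, filter_size), psd_bar)
    return fftconvolve(X_new, h[None, :], mode="same")
```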
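The Open Datasets row quotes the paper's use of Physionet (Sleep EDF), SHHS, and MASS. Only Sleep EDF can be pulled directly with MNE-Python's built-in fetcher; the snippet below is a minimal, illustrative way to load one recording and keep two EEG channels. The subject index and channel selection are choices made for this example, not taken from the paper, and SHHS and MASS require separate data-access agreements.

```python
import mne
from mne.datasets.sleep_physionet.age import fetch_data

# Download the PSG recording and hypnogram of one Sleep EDF (Physionet) subject.
paths = fetch_data(subjects=[0], recording=[1])
raw = mne.io.read_raw_edf(paths[0][0], preload=True)
raw.set_annotations(mne.read_annotations(paths[0][1]))

# Keep two EEG channels, since the paper performs sleep staging from 2-channel EEG.
raw.pick(["EEG Fpz-Cz", "EEG Pz-Oz"])
print(raw.info)
```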
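For the Dataset Splits row, a validation set made of 20% of the subjects (not 20% of the epochs) can be obtained with scikit-learn's GroupShuffleSplit. The arrays below are random stand-ins for a real sleep-staging dataset, and the specific split utility is an assumption; the paper does not state which tool it used.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Random stand-ins: 1000 thirty-second epochs of 2-channel EEG, 5 sleep stages, 50 subjects.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2, 3000))
y = rng.integers(0, 5, size=1000)
subjects = rng.integers(0, 50, size=1000)

# Hold out 20% of the subjects from the training set for validation / early stopping.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, val_idx = next(splitter.split(X, y, groups=subjects))
assert set(subjects[train_idx]).isdisjoint(subjects[val_idx])  # no subject leakage
```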
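The Experiment Setup row translates directly into an optimizer configuration and an early-stopping loop. The sketch below wires up the reported hyperparameters (Adam, learning rate 10⁻³ for Chambon, 10⁻⁴ with weight decay 10⁻³ for DeepSleepNet, batch size 128, patience of 10 epochs) around toy stand-in models and data; the actual architectures and training loop come from the authors' repository, not from this snippet.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in models; the paper uses the Chambon et al. and DeepSleepNet architectures.
chambon_net = nn.Sequential(nn.Flatten(), nn.Linear(2 * 3000, 5))
deep_sleep_net = nn.Sequential(nn.Flatten(), nn.Linear(2 * 3000, 5))

# Reported settings: Adam with lr=1e-3 for Chambon; lr=1e-4, weight_decay=1e-3 for DeepSleepNet.
opt_chambon = torch.optim.Adam(chambon_net.parameters(), lr=1e-3)
opt_dsn = torch.optim.Adam(deep_sleep_net.parameters(), lr=1e-4, weight_decay=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy tensors standing in for 30 s epochs of 2-channel EEG; batch size 128 as in the paper.
X, y = torch.randn(512, 2, 3000), torch.randint(0, 5, (512,))
train_loader = DataLoader(TensorDataset(X[:384], y[:384]), batch_size=128, shuffle=True)
val_loader = DataLoader(TensorDataset(X[384:], y[384:]), batch_size=128)

# Early stopping on validation loss with a patience of 10 epochs (Chambon model shown here).
best_val, patience, wait = float("inf"), 10, 0
for epoch in range(100):
    chambon_net.train()
    for xb, yb in train_loader:
        opt_chambon.zero_grad()
        criterion(chambon_net(xb), yb).backward()
        opt_chambon.step()
    chambon_net.eval()
    with torch.no_grad():
        val_loss = sum(criterion(chambon_net(xb), yb).item() for xb, yb in val_loader)
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break
```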