Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
SPDIM: Source-Free Unsupervised Conditional and Label Shift Adaptation in EEG
Authors: Shanglin Li, Motoaki Kawanabe, Reinmar Kobler
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In simulations, we demonstrate that SPDIM can compensate for the shifts under our generative model. Moreover, using public EEG-based brain-computer interface and sleep staging datasets, we show that SPDIM outperforms prior approaches. We conducted simulations and experiments with public EEG motor imagery and sleep stage datasets to evaluate our proposed framework empirically. |
| Researcher Affiliation | Academia | 1 Nara Institute of Science and Technology (NAIST), Nara, Japan; 2 Department of Dynamic Brain Imaging, ATR, Kyoto, Japan; 3 Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan |
| Pseudocode | No | The paper describes methods and procedures in narrative text and mathematical formulations but does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | We used publicly available Python code for baseline methods and implemented custom methods using the packages torch (Paszke et al., 2019), scikit-learn (Pedregosa et al., 2011), braindecode (Schirrmeister et al., 2017), geoopt (Kochurov et al., 2020) and pyRiemann (Barachant et al., 2023). |
| Open Datasets | Yes | using public EEG-based brain-computer interface and sleep staging datasets, we show that SPDIM outperforms prior approaches. We considered 4 public motor imagery datasets: BNCI2014001 (Tangermann et al., 2012) (9 subjects/2 sessions/4 classes/22 channels), BNCI2015001 (Faller et al., 2012) (12/2-3/2/13), Zhou2016 (Zhou et al., 2016) (4/3/3/14), and BNCI2014004 (Leeb et al., 2007) (9/5/2/3). We considered 4 public sleep stage datasets: CAP (Terzano et al., 2001; Goldberger et al., 2000), Dreem (Guillot et al., 2020), HMC (Alvarez-Estevez & Rijsman, 2021a;b), and ISRUC (Khalighi et al., 2016). |
| Dataset Splits | Yes | We split the source domains data into training and validation sets (80% / 20% splits, randomized, stratified by domain and label) and iterated through the training set for 100 epochs. For early stopping, models were fit with a single stratified (domain and label) inner train/validation split. To evaluate the methods, we employed a 10-fold grouped cross-validation scheme, ensuring that each group (i.e., subject) appears either in the training set (i.e., source domains) or the test set (i.e., target domains). |
| Hardware Specification | Yes | We conducted the experiments on standard computation PCs with 32-core CPUs, 128 GB of RAM, and a single GPU. |
| Software Dependencies | No | We used publicly available Python code for baseline methods and implemented custom methods using the packages torch (Paszke et al., 2019), scikit-learn (Pedregosa et al., 2011), braindecode (Schirrmeister et al., 2017), geoopt (Kochurov et al., 2020) and pyRiemann (Barachant et al., 2023). We used the implementation provided in braindecode (Schirrmeister et al., 2017) for all architectures above, and stuck to all model hyper-parameters as provided in braindecode. We used the cross-entropy loss as the training objective, employing the PyTorch framework (Paszke et al., 2019) with extensions for structured matrices (Ionescu et al., 2015) and manifold-constrained gradients (Absil et al., 2008) to propagate gradients through the layers. We used MNE-Python (Gramfort et al., 2014) for pre-processing. |
| Experiment Setup | Yes | The first convolutional layer applies convolution along the temporal dimension, implementing a finite impulse response (FIR) filter bank (4 filters)... The second convolutional layer applies spatio-spectral filters (40 filters)... A subsequent BiMap layer projects covariance matrices to a D-dimensional subspace (D = 20)... Finally, the classification head g is parametrized as a linear layer with softmax activations. Specifically, gradients were estimated using fixed-size mini-batches (50 observations; 10 per domain across 5 domains) and updated parameters with the Riemannian ADAM optimizer (Bécigneul & Ganea, 2018) (10^-3 learning rate, 10^-4 weight decay, β1 = 0.9, β2 = 0.999). We set the temperature scaling factor in (21) to T = 2 for binary classification problems and T = 0.8 otherwise. |
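The evaluation scheme quoted under "Dataset Splits" (10-fold grouped cross-validation by subject, with an inner 80/20 stratified split of the source data) can be sketched with scikit-learn. This is a minimal sketch with synthetic data: the array shapes, subject count, and label layout are illustrative placeholders, not values from the paper.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, train_test_split

# Synthetic stand-in data: 10 subjects x 20 trials, binary labels.
X = np.random.default_rng(0).normal(size=(200, 8))   # placeholder features
groups = np.repeat(np.arange(10), 20)                # subject id per trial
y = np.tile([0, 1], 100)                             # placeholder labels

# Outer loop: 10-fold grouped CV, so a subject is never in both
# the source (training) and target (test) domains.
gkf = GroupKFold(n_splits=10)
for fold, (src_idx, tgt_idx) in enumerate(gkf.split(X, y, groups)):
    assert set(groups[src_idx]).isdisjoint(groups[tgt_idx])

    # Inner 80/20 split of the source data, stratified jointly by
    # domain (subject) and label, as described in the quote.
    strat_key = groups[src_idx] * 2 + y[src_idx]
    tr_idx, va_idx = train_test_split(
        src_idx, test_size=0.2, random_state=0, stratify=strat_key)
```

The joint stratification key ensures each (subject, label) combination keeps the same proportion in the training and validation sets, which is what "stratified by domain and label" implies.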
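The temperature scaling mentioned in the last quote (T = 2 for binary problems, T = 0.8 otherwise) refers to dividing the classifier logits by T before the softmax; T > 1 flattens the output distribution and T < 1 sharpens it. A minimal NumPy sketch (the function name and example logits are illustrative, not from the paper):

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """Temperature-scaled softmax: softer output for T > 1, sharper for T < 1."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

p_binary = softmax_with_temperature([2.0, 0.5], 2.0)        # binary setting, T = 2
p_multi = softmax_with_temperature([2.0, 0.5, -1.0], 0.8)   # multi-class setting, T = 0.8
```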