Modeling Shared Responses in Neuroimaging Studies through MultiView ICA

Authors: Hugo Richard, Luigi Gresele, Aapo Hyvärinen, Bertrand Thirion, Alexandre Gramfort, Pierre Ablin

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the usefulness of our approach first on fMRI data, where our model shows improved sensitivity in identifying common sources among subjects. Moreover, the sources recovered by our model exhibit lower between-session variability than other methods. On magnetoencephalography (MEG) data, our method yields more accurate source localization on phantom data. Applied to 200 subjects from the Cam-CAN dataset, it reveals a clear sequence of evoked activity in sensor and source space. (Section 4, Experiments)
Researcher Affiliation | Academia | Hugo Richard, Inria, Université Paris-Saclay, Saclay, France (hugo.richard@inria.fr); Luigi Gresele, MPI for Intelligent Systems and MPI for Biological Cybernetics, Tübingen, Germany (luigi.gresele@tuebingen.mpg.de); Aapo Hyvärinen, Inria, Université Paris-Saclay, Saclay, France, and Department of Computer Science, HIIT, University of Helsinki, Finland (aapo.hyvarinen@helsinki.fi); Bertrand Thirion, Inria, Université Paris-Saclay, Saclay, France (bertrand.thirion@inria.fr); Alexandre Gramfort, Inria, Université Paris-Saclay, Saclay, France (alexandre.gramfort@inria.fr); Pierre Ablin, Département de Mathématiques et Applications, École Normale Supérieure, Paris, France (pierre.ablin@ens.fr)
Pseudocode | Yes | Algorithm 1: Alternate quasi-Newton method for MultiView ICA (see the sketch after this table).
Open Source Code | Yes | The code for MultiView ICA is available online at https://github.com/hugorichard/multiviewica. (A usage sketch follows the table.)
Open Datasets | Yes | The sherlock dataset [19] contains recordings of 16 subjects watching an episode of the BBC TV show "Sherlock" (50 mins). The forrest dataset [35] was collected while 19 subjects were listening to an auditory version of the film "Forrest Gump" (110 mins). The clips dataset [59] was collected while 12 participants were exposed to short video clips (130 mins). The raiders dataset [59] was collected while 11 participants were watching the movie "Raiders of the Lost Ark" (110 mins). The raiders-full dataset [59] is an extension of the raiders dataset where the first two scenes of the movie are shown twice (130 mins). Finally, we apply MultiView ICA to the Cam-CAN dataset [66].
Dataset Splits | Yes | We split the data into three groups. First, we randomly choose 80% of all runs from all subjects to form the training set. Then, we randomly choose 80% of the subjects and use their remaining 20% of runs as the test set. The left-out runs of the remaining 20% of subjects form the validation set. (An illustrative split sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used to run the experiments.
Software Dependencies | No | We use Matplotlib for plotting [37], scikit-learn for machine-learning pipelines [55], MNE for MEG processing [30], Nilearn for fMRI processing and for its CanICA implementation [2], and Brainiak [45] for its SRM implementation. (Library names are given, but no versions are specified.)
Experiment Setup | Yes | In the following, the noise parameter in MultiView ICA is always fixed to σ = 1. We use the function f(·) = log cosh(·), giving the non-linearity f′(·) = tanh(·). We use the Infomax cost function [8] with the same non-linearity to perform standard ICA, with the Picard algorithm [1] for fast and robust minimization of the cost function. Picard is applied with the default hyper-parameters. (A setup sketch follows the table.)
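
Regarding the Pseudocode row above: Algorithm 1 in the paper alternates over views, updating one unmixing matrix at a time with a quasi-Newton step. The sketch below only mirrors that alternating structure on a simplified surrogate of the MultiView ICA cost (a log-determinant term, a quadratic pull of each view's sources toward the average source estimate, and a log cosh penalty on that average); the function names are hypothetical, and an off-the-shelf L-BFGS solve stands in for the paper's closed-form quasi-Newton direction.

    # Minimal sketch, not the paper's exact algorithm: alternating, per-view
    # minimization of a simplified MultiView ICA-style cost with L-BFGS.
    import numpy as np
    from scipy.optimize import minimize

    def surrogate_cost(w_flat, i, W_all, X, sigma=1.0):
        """Surrogate cost for view i with the other unmixing matrices fixed.

        X: array (n_views, p, n); W_all: list of (p, p) unmixing matrices.
        """
        k, p, n = X.shape
        W = [w.copy() for w in W_all]
        W[i] = w_flat.reshape(p, p)
        Y = np.stack([W[j] @ X[j] for j in range(k)])  # per-view source estimates
        y_bar = Y.mean(axis=0)                         # shared source estimate
        logdet = -np.linalg.slogdet(W[i])[1]           # -log|det W_i|
        fit = ((Y[i] - y_bar) ** 2).sum() / (2 * sigma ** 2 * n)
        penalty = np.log(np.cosh(y_bar)).sum() / n     # log cosh density term
        return logdet + fit + penalty

    def alternate_minimization(X, n_outer=10, sigma=1.0, seed=0):
        """Alternate over views; each inner step is an L-BFGS solve for one W_i."""
        rng = np.random.default_rng(seed)
        k, p, _ = X.shape
        W_all = [np.eye(p) + 0.01 * rng.standard_normal((p, p)) for _ in range(k)]
        for _ in range(n_outer):
            for i in range(k):
                res = minimize(surrogate_cost, W_all[i].ravel(),
                               args=(i, W_all, X, sigma), method="L-BFGS-B")
                W_all[i] = res.x.reshape(p, p)
        return W_all

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        S = rng.laplace(size=(3, 500))                 # shared non-Gaussian sources
        X = np.stack([rng.standard_normal((3, 3)) @ (S + 0.1 * rng.standard_normal(S.shape))
                      for _ in range(4)])              # 4 noisy views
        W_est = alternate_minimization(X, n_outer=5)
        print([W.shape for W in W_est])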
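For the Open Source Code row, the snippet below is a hypothetical usage sketch of the released multiviewica package; the entry-point name, keyword arguments, and the (P, W, S) return convention are assumptions that should be checked against the repository README.

    # Hypothetical usage sketch; verify the actual API against
    # https://github.com/hugorichard/multiviewica before relying on it.
    import numpy as np
    from multiviewica import multiviewica  # assumed entry point

    rng = np.random.RandomState(0)
    n_views, n_features, n_samples = 4, 10, 1000
    # One (features, samples) data matrix per view/subject.
    X = rng.randn(n_views, n_features, n_samples)

    # Assumed to return dimension-reduction operators P, per-view unmixing
    # matrices W, and the estimated shared sources S.
    P, W, S = multiviewica(X, n_components=5, noise=1.0, random_state=0)
    print(S.shape)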
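The Dataset Splits row describes a three-way split; the snippet below is an illustrative reconstruction of that logic with made-up subject and run counts (it is not code from the paper).

    # Illustrative split: 80% of runs (all subjects) for training; the held-out
    # runs go to the test set for 80% of subjects and to the validation set for
    # the remaining 20% of subjects. Counts are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n_subjects, n_runs = 16, 10

    runs = rng.permutation(n_runs)
    train_runs = runs[: int(0.8 * n_runs)]      # 80% of runs -> training set
    heldout_runs = runs[int(0.8 * n_runs):]     # remaining 20% of runs

    subjects = rng.permutation(n_subjects)
    test_subjects = subjects[: int(0.8 * n_subjects)]   # 80% of subjects
    val_subjects = subjects[int(0.8 * n_subjects):]     # remaining 20% of subjects

    train = [(s, r) for s in range(n_subjects) for r in train_runs]
    test = [(s, r) for s in test_subjects for r in heldout_runs]
    val = [(s, r) for s in val_subjects for r in heldout_runs]
    print(len(train), len(test), len(val))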
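Finally, for the Experiment Setup row, the sketch below spells out the stated choices: σ fixed to 1, the log cosh density with its tanh score, and standard ICA run through the python-picard solver. The specific picard keyword arguments shown (ortho=False, extended=False to select the unconstrained Infomax variant) are an assumption; the report only states that default hyper-parameters were used.

    # Sketch of the stated setup; parameter choices marked below are assumptions.
    import numpy as np
    from picard import picard

    sigma = 1.0                       # MultiView ICA noise parameter, fixed to 1

    def f(y):
        return np.log(np.cosh(y))     # density model f(.) = log cosh(.)

    def f_prime(y):
        return np.tanh(y)             # corresponding non-linearity f'(.) = tanh(.)

    rng = np.random.RandomState(0)
    S = rng.laplace(size=(3, 2000))   # non-Gaussian sources
    A = rng.randn(3, 3)               # mixing matrix
    X = A @ S

    # Infomax-style maximum-likelihood ICA with the tanh non-linearity;
    # ortho=False / extended=False (assumed) select the unconstrained variant.
    K, W, Y = picard(X, fun='tanh', ortho=False, extended=False, random_state=0)
    print(W.shape, Y.shape)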