Learning Neural Representations of Human Cognition across Many fMRI Studies
Authors: Arthur Mensch, Julien Mairal, Danilo Bzdok, Bertrand Thirion, Gael Varoquaux
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our multi-dataset classification model achieves the best prediction performance on several large reference datasets, compared to models without cognitive-aware low-dimensional representations; it brings a substantial performance boost to the analysis of small datasets, and can be introspected to identify universal template cognitive concepts. We demonstrate the performance of our model on several openly accessible and rich reference datasets in the brain-imaging domain. |
| Researcher Affiliation | Academia | Arthur Mensch, Inria (arthur.mensch@m4x.org); Julien Mairal, Inria (julien.mairal@inria.fr); Danilo Bzdok, Department of Psychiatry, RWTH (danilo.bzdok@rwth-aachen.de); Bertrand Thirion, Inria (bertrand.thirion@inria.fr); Gaël Varoquaux, Inria (gael.varoquaux@inria.fr). Inria, CEA, Université Paris-Saclay, 91191 Gif sur Yvette, France; Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France |
| Pseudocode | No | The paper describes the model and training process using text and mathematical equations, and a diagram (Figure 1), but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for reproducing experiments is available at http://github.com/arthurmensch/cogspaces. |
| Open Datasets | Yes | Our experimental study features 5 publicly-available task fMRI studies. We use all resting-state records from the HCP900 release [1] to compute the sparse dictionaries that are used in the first dimension reduction materialized by Wg. We succinctly describe the conditions of each dataset; we refer the reader to the original publications for further details. HCP: gambling, working memory, motor, language, social and relational tasks. 800 subjects. Archi [31]: localizer protocol, motor, social and relational tasks. 79 subjects. Brainomics [32]: localizer protocol. 98 subjects. Camcan [33]: audio-video task, with frequency variation. 606 subjects. LA5c consortium [34]: task-switching, balloon analog risk taking, stop-signal and spatial working memory capacity tasks (high-level tasks). 200 subjects. |
| Dataset Splits | No | Finally, test accuracy is measured on half of the subjects of each dataset, which are removed from the training sets beforehand. Benchmarks are repeated 20 times with random split folds to estimate the variance in performance. |
| Hardware Specification | Yes | Training the model on projected data (Wg xi)i takes 10 minutes on a conventional single-CPU machine with an Intel Xeon at 3.21 GHz. |
| Software Dependencies | No | The paper mentions 'pytorch', 'nilearn [35]', and 'scikit-learn [36]', but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Our model involves a few noncritical hyperparameters: we use batches of size 256, set the latent dimension l = 100, and use a Dropout rate r = 0.75 in the latent cognitive space; this value performs slightly better than r = 0.5. We use a multi-scale dictionary with 16, 64 and 512 components, as it yields the best quantitative and qualitative results. |
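The split protocol quoted above (half of the subjects of each dataset held out for testing, repeated 20 times with random splits) can be sketched as a subject-level splitter. This is a minimal illustration of the described protocol, not the authors' code; the function name and record format are hypothetical.

```python
import random

def half_subject_splits(subject_ids, n_repeats=20, seed=0):
    """Yield (train_subjects, test_subjects) pairs in which half of the
    unique subjects are held out for testing. Splitting at the subject
    level (not the record level) keeps all records of a subject on the
    same side of the split, as required when subjects contribute
    multiple fMRI records."""
    subjects = sorted(set(subject_ids))
    rng = random.Random(seed)
    for _ in range(n_repeats):
        shuffled = subjects[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        yield set(shuffled[:half]), set(shuffled[half:])

# Hypothetical example: 10 subjects, each with 3 fMRI records.
records = [f"s{i:02d}" for i in range(10) for _ in range(3)]
splits = list(half_subject_splits(records, n_repeats=20))
```

Repeating the split 20 times, as the paper does, lets one report the variance of test accuracy across random subject partitions rather than a single point estimate.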