Shared Space Transfer Learning for analyzing multi-site fMRI data

Authors: Tony Muhammad Yousefnezhad, Alessandro Selvitella, Daoqiang Zhang, Andrew Greenshaw, Russell Greiner

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the effectiveness of the proposed method for transferring between various cognitive tasks. Our comprehensive experiments validate that SSTL achieves superior performance to other state-of-the-art analysis techniques.
Researcher Affiliation | Academia | (1) University of Alberta, Canada; (2) Nanjing University of Aeronautics and Astronautics, China; (3) Alberta Machine Intelligence Institute (Amii), Canada; (4) Purdue University Fort Wayne, United States
Pseudocode | No | The paper describes the proposed method using mathematical formulations and descriptive text, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | Yes | SSTL is an open-source technique and can also be used via our GUI-based toolbox called easy fMRI. All algorithms for generating the experimental studies are shared as parts of our GUI-based toolbox called easy fMRI (https://easyfmri.learningbymachine.com/).
Open Datasets | Yes | Table 1 lists the 8 datasets (A to H) used for our empirical studies. These datasets are provided by the OpenNeuro repository... Available at https://openneuro.org/. See also [25] Xue, G., Aron, A. R., & Poldrack, R. A. (2008). Common neural substrates for inhibition of spoken and manual responses. Cerebral Cortex, 18(8):1923-1932.
Dataset Splits | Yes | In the training phase, we use a one-subject-out strategy for each training site to generate the validation set, i.e., all responses of a subject are considered as the validation set, and the other responses are used as the training set. (A minimal split sketch in Python follows the table.)
Hardware Specification | Yes | Main: Giga X399, CPU: AMD Ryzen Threadripper 2920X (24 × 3.5 GHz), RAM: 128 GB, GPU: NVIDIA GeForce RTX 2080 SUPER (8 GB memory), OS: Fedora 33, Python: 3.8.5, Pip: 20.2.3, Numpy: 1.19.2, Scipy: 1.5.2, Scikit-Learn: 0.23.2, MPI4py: 3.0.3, PyTorch: 1.6.0.
Software Dependencies | Yes | OS: Fedora 33, Python: 3.8.5, Pip: 20.2.3, Numpy: 1.19.2, Scipy: 1.5.2, Scikit-Learn: 0.23.2, MPI4py: 3.0.3, PyTorch: 1.6.0. (A version-check snippet follows the table.)
Experiment Setup | Yes | We tune the hyper-parameters (regularization ϵ ∈ {10^-2, 10^-4, 10^-6, 10^-8}, number of features k, and maximum number of iterations L) by using grid search based on the performance of the validation set. As mentioned before, SSTL just sets L = 1, but for other TL techniques (such as SRM, MDDL, MSMD, etc.) we consider L ∈ {1, 2, ..., 50}. For selecting the number of features k, we first let k1 = min(V, T_d) for d = 1, ..., D [4]. Then, we benchmark the performance of the analysis by using k = α·k1, where α ∈ {0.1, 0.5, 1, 1.1, 1.5, 2}. We use the ν-support vector machine (ν-SVM) [29] for classification analysis. (A sketch of this grid and the ν-SVM step also follows the table.)
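The one-subject-out validation strategy quoted under Dataset Splits maps naturally onto scikit-learn's LeaveOneGroupOut. The sketch below is not the authors' code: the arrays X, y, and subjects are hypothetical placeholders standing in for one training site's responses, task labels, and per-response subject IDs.

```python
# Minimal sketch of the one-subject-out validation split for a single training
# site, assuming per-response subject IDs are available (placeholder data here).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))        # 120 responses x 500 voxels (placeholder)
y = rng.integers(0, 2, size=120)           # task labels (placeholder)
subjects = np.repeat(np.arange(10), 12)    # 10 subjects, 12 responses each

logo = LeaveOneGroupOut()
for train_idx, val_idx in logo.split(X, y, groups=subjects):
    X_train, y_train = X[train_idx], y[train_idx]
    X_val, y_val = X[val_idx], y[val_idx]
    # All responses of one held-out subject form the validation set;
    # the remaining responses form the training set.
    # ... fit the analysis pipeline on (X_train, y_train), evaluate on (X_val, y_val)
```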
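To reproduce the environment listed under Software Dependencies, one simple sanity check (a convenience snippet, not from the paper) is to print the installed versions and compare them against the reported ones:

```python
# Print installed versions of the reported stack; the expected numbers are the
# ones listed in the Software Dependencies row above.
import sys
import numpy, scipy, sklearn, mpi4py, torch

expected = {
    "python": "3.8.5",
    "numpy": "1.19.2",
    "scipy": "1.5.2",
    "scikit-learn": "0.23.2",
    "mpi4py": "3.0.3",
    "torch": "1.6.0",
}
installed = {
    "python": sys.version.split()[0],
    "numpy": numpy.__version__,
    "scipy": scipy.__version__,
    "scikit-learn": sklearn.__version__,
    "mpi4py": mpi4py.__version__,
    "torch": torch.__version__,
}
for name in expected:
    print(f"{name}: installed {installed[name]}, reported {expected[name]}")
```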
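Finally, the grid described under Experiment Setup can be sketched as follows. This is an illustration under stated assumptions, not the authors' implementation: the shared-space learning step that would actually consume ϵ and k is elided, V and T are placeholder dimensions, and scikit-learn's NuSVC stands in for the ν-SVM classifier.

```python
# Sketch of the hyper-parameter grid and nu-SVM classification described above.
# V (voxels) and T (time points for one site) are placeholders; the shared-space
# learning that would use (eps, k) is only indicated by a comment.
from itertools import product
import numpy as np
from sklearn.svm import NuSVC
from sklearn.metrics import accuracy_score

V, T = 1000, 200                           # hypothetical voxel / time-point counts
eps_grid = [1e-2, 1e-4, 1e-6, 1e-8]        # regularization epsilon
alpha_grid = [0.1, 0.5, 1, 1.1, 1.5, 2]
k1 = min(V, T)                             # k1 = min(V, T_d), per the quoted setup
k_grid = sorted({max(1, int(a * k1)) for a in alpha_grid})

# Placeholder shared-space features and labels for the train / validation folds.
rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((150, 50)), rng.integers(0, 2, size=150)
X_val, y_val = rng.standard_normal((30, 50)), rng.integers(0, 2, size=30)

best = None
for eps, k in product(eps_grid, k_grid):
    # ... learn the shared space with regularization eps and k features here ...
    clf = NuSVC(nu=0.5, kernel="linear").fit(X_train, y_train)
    acc = accuracy_score(y_val, clf.predict(X_val))
    if best is None or acc > best[0]:
        best = (acc, eps, k)

print("best validation accuracy %.3f at eps=%g, k=%d" % best)
```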