Deep Hyperalignment

Authors: Muhammad Yousefnezhad, Daoqiang Zhang

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental studies on multi-subject fMRI analysis confirm that the DHA method achieves superior performance to other state-of-the-art HA algorithms.
Researcher Affiliation | Academia | Muhammad Yousefnezhad, Daoqiang Zhang, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, {myousefnezhad,dqzhang}@nuaa.edu.cn
Pseudocode | Yes | Algorithm 1: Deep Hyperalignment (DHA)
Open Source Code | No | The paper states: 'This paper provides a detailed description of HA methods in the supplementary materials (https://sourceforge.net/projects/myousefnezhad/files/DHA/)'. This statement describes the content of the link as a 'detailed description' rather than explicitly stating that it contains the source code for the methodology presented in the paper.
Open Datasets | Yes | This paper utilizes 5 datasets, shared by OpenfMRI (https://openfmri.org), for running the empirical studies in this section.
Dataset Splits | Yes | In addition, leave-one-subject-out cross-validation is utilized for partitioning each dataset into training and testing sets.
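The leave-one-subject-out scheme above can be sketched with scikit-learn's `LeaveOneGroupOut`, treating each subject as a group. The data shapes and subject count here are hypothetical toy values, not taken from the paper:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical toy data: 4 subjects, 6 samples each, 10 voxels.
rng = np.random.default_rng(0)
X = rng.standard_normal((24, 10))
y = rng.integers(0, 2, size=24)
subjects = np.repeat(np.arange(4), 6)  # subject ID for each sample

# One fold per subject: train on all other subjects, test on the held-out one.
logo = LeaveOneGroupOut()
folds = list(logo.split(X, y, groups=subjects))

for train_idx, test_idx in folds:
    # every test sample belongs to the single held-out subject
    held_out = set(subjects[test_idx])
    assert len(held_out) == 1
```

Each of the 4 folds holds out exactly one subject for testing, matching the partitioning described in the row above.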
Hardware Specification | Yes | DELL, CPU = Intel Xeon E5-2630 v3 (8 cores, 2.4 GHz), RAM = 64GB, GPU = GeForce GTX TITAN X (12GB memory), OS = Ubuntu 16.04.3 LTS, Python = 3.6.2, Pip = 9.0.1, Numpy = 1.13.1, Scipy = 0.19.1, Scikit-Learn = 0.18.2, Theano = 0.9.0.
Software Dependencies | Yes | Python = 3.6.2, Pip = 9.0.1, Numpy = 1.13.1, Scipy = 0.19.1, Scikit-Learn = 0.18.2, Theano = 0.9.0.
Experiment Setup | Yes | Consequently, three hidden layers (C = 5) and the regularization parameters ε = {10⁻⁴, 10⁻⁶, 10⁻⁸} are employed in the DHA method. In addition, the number of units in the intermediate layers is set to U⁽ᵐ⁾ = K·V for m = 2, …, C−1, where C is the number of layers, V denotes the number of voxels, and K is the number of stimulus categories in each dataset. Further, three distinct activation functions are employed: Sigmoid (g(x) = 1/(1 + exp(−x))), Hyperbolic (g(x) = tanh(x)), and Rectified Linear Unit or ReLU (g(x) = ln(1 + exp(x))).
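The layer sizing and activation functions quoted above can be sketched as follows. The values of K and V are hypothetical placeholders; note that the paper writes ReLU in the smooth ln(1 + exp(x)) form, commonly known as softplus:

```python
import numpy as np

def sigmoid(x):
    # g(x) = 1 / (1 + exp(-x))
    return 1.0 / (1.0 + np.exp(-x))

def hyperbolic(x):
    # g(x) = tanh(x)
    return np.tanh(x)

def relu_softplus(x):
    # the paper's ReLU: g(x) = ln(1 + exp(x)), i.e. the softplus form
    return np.log1p(np.exp(x))

# Intermediate-layer width U(m) = K * V for m = 2, ..., C-1.
# Hypothetical example: K = 4 stimulus categories, V = 100 voxels.
K, V = 4, 100
C = 5  # total layers, i.e. three hidden layers
U = K * V  # width of each intermediate layer

print(U)  # 400
```

At x = 0 the three activations give g(0) = 0.5, 0, and ln 2 respectively, which is a quick sanity check on the formulas.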