Semi-Supervised Factored Logistic Regression for High-Dimensional Neuroimaging Data
Authors: Danilo Bzdok, Michael Eickenberg, Olivier Grisel, Bertrand Thirion, Gael Varoquaux
NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that this approach yields more accurate and interpretable neural models of psychological tasks in a reference dataset, as well as better generalization to other datasets. ... All trained classification models were tested on a large, unseen test set (20% of data) in the present analyses. Across choices for n, SSFLogReg achieved more than 95% out-of-sample accuracy, whereas supervised learning based on PCA, SPCA, ICA, and AE loadings ranged from 32% to 87% (Table 2). |
| Researcher Affiliation | Academia | INRIA, Parietal team, Saclay, France; CEA, Neurospin, Gif-sur-Yvette, France |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | All Python scripts that generated the results are accessible online for reproducibility and reuse (http://github.com/banilo/nips2015). |
| Open Datasets | Yes | As the currently biggest openly-accessible reference dataset, we chose resources from the Human Connectome Project (HCP) [4]. ... The ARCHI dataset [21] provides activity maps from diverse experimental tasks... |
| Dataset Splits | No | The paper mentions testing on an "unseen test set (20% of data)" and assessing "out-of-sample performance" but does not explicitly describe a separate validation split for hyperparameter tuning. A hedged sketch of this 80/20 split appears below the table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used (e.g., GPU/CPU models, memory) for running the experiments. |
| Software Dependencies | No | The analyses were performed in Python. We used nilearn to handle the large quantities of neuroimaging data [1] and Theano for automatic, numerically stable differentiation of symbolic computation graphs [5, 7]. The paper names the software but does not specify version numbers. A minimal nilearn masking sketch appears below the table. |
| Experiment Setup | Yes | The batch size was set to 100... matrix parameters were initialized by Gaussian random values multiplied by 0.004 (i.e., gain), and bias parameters were initialized to 0. ...the learning rate (0.00001), a global damping factor (10⁻⁶), and the decay rate (0.9...) A sketch wiring these constants into a standard optimizer update appears below the table. |
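To make the evaluation protocol concrete, here is a minimal sketch of the single 80/20 train/test split described in the Research Type and Dataset Splits rows, assuming scikit-learn. The arrays `X` and `y` are placeholders, not objects from the authors' scripts, and the stratification choice is an assumption rather than something the paper states.

```python
# Hedged sketch of the reported protocol: one held-out test set of 20%
# of the data, with no separate validation split described for tuning.
# X and y are placeholder arrays, not data from the paper.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
X = rng.rand(1000, 50)             # stand-in for activity-map features
y = rng.randint(0, 18, size=1000)  # stand-in for task labels

# 80/20 split; stratify=y (an assumption) keeps class proportions equal
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)
```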
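The Software Dependencies row names nilearn and Theano without pinning versions. The snippet below is a minimal sketch of the kind of nilearn masking step the paper alludes to, turning 4D neuroimaging volumes into a samples-by-voxels matrix; both file paths are hypothetical, and `nilearn.maskers` is the modern module path (older releases used `nilearn.input_data`).

```python
# Hedged nilearn sketch: mask 4D task maps into a 2D (maps, voxels) array.
# Both file paths are hypothetical placeholders.
from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img="grey_matter_mask.nii.gz", standardize=True)
X = masker.fit_transform("task_activity_maps.nii.gz")  # (n_maps, n_voxels)
```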
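The Experiment Setup row quotes the initialization scheme and optimizer constants (a 0.004 Gaussian gain, zero biases, learning rate 0.00001, damping 10⁻⁶, decay 0.9). The sketch below wires those constants into a textbook RMSProp-style update in plain NumPy; reading "decay rate" and "damping factor" as RMSProp constants is an assumption, and this is a reconstruction under stated assumptions, not the authors' Theano implementation, which lives in the linked repository.

```python
# Hedged NumPy sketch of the reported setup; layer sizes are placeholders.
import numpy as np

rng = np.random.RandomState(0)
n_in, n_out, gain = 50, 18, 0.004
W = gain * rng.randn(n_in, n_out)  # Gaussian init scaled by the 0.004 gain
b = np.zeros(n_out)                # biases initialized to 0

lr, damping, decay = 1e-5, 1e-6, 0.9
ms_W = np.zeros_like(W)            # running mean of squared gradients

def rmsprop_step(W, grad_W, ms_W):
    """One RMSProp-style update using the paper's reported constants."""
    ms_W = decay * ms_W + (1.0 - decay) * grad_W ** 2
    W = W - lr * grad_W / np.sqrt(ms_W + damping)
    return W, ms_W

# Example usage with a dummy gradient for one minibatch of size 100
grad_W = rng.randn(n_in, n_out)
W, ms_W = rmsprop_step(W, grad_W, ms_W)
```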