Learning brain regions via large-scale online structured sparse dictionary learning

Authors: Elvis Dohmatob, Arthur Mensch, Gaël Varoquaux, Bertrand Thirion

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Preliminary experiments on brain data show that our proposed method extracts structured and denoised dictionaries that are more interpretable and better capture inter-subject variability in small, medium, and large-scale regimes alike, compared to state-of-the-art models.
Researcher Affiliation | Academia | Parietal Team, INRIA / CEA, Neurospin, Université Paris-Saclay, France
Pseudocode | Yes | Algorithm 1 (online algorithm for the dictionary-learning problem (2)) and Algorithm 2 (BCD dictionary update with Laplacian prior) are provided. A minimal sketch of the two algorithms appears below.
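The following is an illustrative NumPy sketch of the online loop (Algorithm 1) with a block-coordinate-descent dictionary update that adds a Laplacian smoothness term (Algorithm 2). It is not the authors' implementation: the ℓ1 sparse-coding step is replaced by a ridge stand-in for brevity, and the graph Laplacian L is assumed to be supplied by the caller.

```python
import numpy as np

def online_structured_dl(X, k=40, alpha=1.0, gamma=1.0, L=None,
                         batch_size=20, n_passes=1, seed=0):
    """Sketch of Algorithms 1-2: online dictionary learning whose
    dictionary update is BCD on the usual quadratic surrogate plus a
    Laplacian smoothness penalty gamma * tr(V^T L V)."""
    rng = np.random.RandomState(seed)
    n, p = X.shape
    V = rng.randn(p, k)            # initial dictionary
    A = np.zeros((k, k))           # accumulated sum of u u^T
    B = np.zeros((p, k))           # accumulated sum of x u^T
    L = np.zeros((p, p)) if L is None else L
    for _ in range(n_passes):
        for start in range(0, n, batch_size):
            xb = X[start:start + batch_size]   # mini-batch of eta samples
            # code the batch (ridge stand-in for the lasso subproblem)
            U = np.linalg.solve(V.T @ V + alpha * np.eye(k), V.T @ xb.T).T
            A += U.T @ U
            B += xb.T @ U
            # BCD over atoms: exact minimizer of the penalized surrogate
            for j in range(k):
                rhs = B[:, j] - V @ A[:, j] + A[j, j] * V[:, j]
                M = (A[j, j] + 1e-12) * np.eye(p) + gamma * L
                V[:, j] = np.linalg.solve(M, rhs)
                V[:, j] /= max(np.linalg.norm(V[:, j]), 1.0)  # unit-ball proj.
    return V
```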
Open Source Code | No | The authors' implementation of the proposed Smooth-SODL (2) model will soon be made available as part of the Nilearn package [2].
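Since Smooth-SODL itself was not yet released, the closest available estimator is Nilearn's existing online DictLearning, which is sparse but lacks the Laplacian smoothness prior. A hedged usage example follows; the ADHD sample and the parameter values are illustrative stand-ins, with k and the batch size mirroring the paper.

```python
from nilearn import datasets
from nilearn.decomposition import DictLearning

# Small public resting-state sample used here only as a stand-in for HCP data.
adhd = datasets.fetch_adhd(n_subjects=4)

model = DictLearning(n_components=40,  # k = 40 atoms, as in the paper
                     batch_size=20,    # mini-batch size eta = 20
                     alpha=10,         # sparsity level (illustrative value)
                     n_epochs=1, random_state=0, verbose=1)
model.fit(adhd.func)
components_img = model.components_img_  # 4D image of the learned atoms
```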
Open Datasets | Yes | Our experiments were done on task fMRI data from 500 subjects from the HCP (Human Connectome Project) dataset [20].
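A sketch of how such task maps are typically turned into the data matrix X with a Nilearn masker, assuming locally available images; the file paths are placeholders, since HCP access is credentialed and the data are not bundled with Nilearn.

```python
from nilearn.input_data import NiftiMasker

# Placeholder paths: a grey-matter mask and a 4D stack of task contrast maps.
masker = NiftiMasker(mask_img="grey_matter_mask.nii.gz", standardize=True)
X = masker.fit_transform("hcp_task_contrast_maps.nii.gz")  # (n_maps, p voxels)
```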
Dataset Splits | No | The input data X were shuffled and then split into two groups of the same size. There is no explicit mention of validation splits or percentages.
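A minimal sketch of the described shuffle-and-split into two equal groups; the seed and variable names are illustrative.

```python
import numpy as np

# Shuffle the 500 HCP subjects, then cut into two equal halves
# (no separate validation split is mentioned in the paper).
rng = np.random.RandomState(0)
idx = rng.permutation(500)
group_a, group_b = idx[:250], idx[250:]
```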
Hardware Specification | Yes | All experiments were run on a single CPU of a laptop.
Software Dependencies | No | The paper mentions that the method is "implemented as part of the Nilearn open-source Python library [2]" but does not specify version numbers for Nilearn or Python.
Experiment Setup | Yes | Require: regularization parameters α, γ > 0; initial dictionary V ∈ ℝ^(p×k); number of passes/iterations T on the data. ... We typically use mini-batches of size η = 20. ... we sought a decomposition into a dictionary of k = 40 atoms (components). ... Concerning the α parameter, inspired by [26], we have found the following time-varying data-adaptive choice to work very well in practice: α = α_t ∝ t^(1/2) (10).
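A sketch of the time-varying schedule in Eq. (10), with the caveat that the exact constant and the sign of the exponent are hard to recover from the extracted text; `alpha0` and `power` are assumed names, not from the paper.

```python
def alpha_schedule(t, alpha0=1.0, power=0.5):
    """alpha_t = alpha0 * t**power, a t^(1/2)-type scaling as in Eq. (10).
    The base value and exponent sign are assumptions about the garbled source."""
    return alpha0 * t ** power

alphas = [alpha_schedule(t) for t in range(1, 6)]  # e.g. first 5 iterations
```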