Learning Macroscopic Brain Connectomes via Group-Sparse Factorization

Authors: Farzane Aminmansour, Andrew Patterson, Lei Le, Yisu Peng, Daniel Mitchell, Franco Pestilli, Cesar F. Caiafa, Russell Greiner, Martha White

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show that this greedy algorithm significantly improves on a standard greedy algorithm, called Orthogonal Matching Pursuit. We conclude with an analysis of the solutions found by our method, showing we can accurately reconstruct the diffusion information while maintaining contiguous fascicles with smooth direction changes. (A minimal sketch of the OMP baseline appears after the table.)
Researcher Affiliation | Academia | (1) Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada; (2) Department of Computer Science, Indiana University, Bloomington, Indiana, USA; (3) Department of Computer Science, Northeastern University, Boston, Massachusetts, USA; (4) Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA; (5) Instituto Argentino de Radioastronomía, CCT La Plata, CONICET / CIC-PBA, V. Elisa, Argentina; (6) Tensor Learning Unit, Center for Advanced Intelligence Project, RIKEN, Tokyo, Japan
Pseudocode | Yes | The full algorithm consists of two key steps. The first step is to screen the orientations, using Orientation Greedy in Algorithm 1. (A rough sketch of such a group-wise screening step also appears after the table.)
Open Source Code | Yes | The code is available at: https://github.com/framinmansour/Learning-Macroscopic-Brain-Connectomesvia-Group-Sparse-Factorization
Open Datasets | Yes | We learn on data generated by an expert connectome solution within the ENCODE model (Appendix E.2). Our data is derived from the Human Connectome Project (HCP) dataset [1].
Dataset Splits | No | The paper does not provide specific details on validation dataset splits, percentages, or methodology. It focuses on evaluating against a 'ground truth' rather than a standard train/validation/test split for model generalization.
Hardware Specification | No | The paper mentions no specific hardware details (such as CPU/GPU models or memory) used to run its experiments.
Software Dependencies | No | The paper does not list specific software packages with version numbers used to run the experiments.
Experiment Setup | Yes | We applied batch gradient descent with 15 iterations and a dynamic step size that started from 1e-5 and decreased each time the algorithm could not improve the total objective error. The ℓ1 and group regularizer coefficients were chosen to be 10 for most of the experiments; we tested regularization coefficients in [10^-3, 10^-2, ..., 10^2, 10^3] and found that results were negligibly affected. For the ℓ1 regularizer, we applied a proximal operator to truncate weights less than the threshold of 0.001.
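
Below is a minimal sketch of the training loop described in the Experiment Setup row: 15 batch gradient iterations, a step size starting at 1e-5 that shrinks whenever the objective fails to improve, and a proximal step that zeroes weights below 0.001. The objective and gradient callables, the shrink factor of 0.5, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fit_weights(w0, objective, gradient, n_iters=15,
                step0=1e-5, shrink=0.5, l1_threshold=1e-3):
    """Illustrative batch gradient descent with a shrinking step size and a
    proximal truncation step, following the setup described above.

    w0        : initial weight vector (NumPy array).
    objective : callable w -> scalar loss (data term plus regularizers).
    gradient  : callable w -> gradient of the smooth part of the loss.
    """
    w = w0.copy()
    step = step0
    best = objective(w)
    for _ in range(n_iters):
        # Gradient step on the smooth part of the objective.
        w_new = w - step * gradient(w)
        # Proximal step: truncate weights below the threshold to zero.
        w_new[np.abs(w_new) < l1_threshold] = 0.0
        loss = objective(w_new)
        if loss < best:
            best, w = loss, w_new
        else:
            # No improvement in the total objective: shrink the step size.
            step *= shrink  # shrink factor is an assumption, not from the paper
    return w
```

The shrinking-step rule here is one plausible reading of "decreased each time the algorithm could not improve the total objective error"; the paper may use a different schedule.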
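For reference, the Orthogonal Matching Pursuit baseline named under Research Type can be sketched as a generic textbook OMP over a dense dictionary Phi and signal y. Variable names and the stopping rule (a fixed number of atoms) are illustrative, not code from the paper or its repository.

```python
import numpy as np

def orthogonal_matching_pursuit(Phi, y, k):
    """Minimal OMP: greedily select k dictionary atoms to approximate y.

    Phi : (m, n) dictionary whose columns are candidate atoms.
    y   : (m,) signal to approximate.
    k   : number of atoms (non-zeros) to select.
    """
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        correlations = Phi.T @ residual
        j = int(np.argmax(np.abs(correlations)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected support by least squares.
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - Phi @ x
    return x, support
```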
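The Pseudocode row refers to an orientation-screening step (Orientation Greedy, Algorithm 1 in the paper). Assuming it behaves like a group-wise greedy selection, where atoms are grouped by orientation and whole groups are scored against the residual, a rough sketch might look like the following; the actual selection criterion and refitting in the paper's Algorithm 1 may differ.

```python
import numpy as np

def screen_orientations(Phi, y, groups, n_keep):
    """Hypothetical group-wise greedy screening of orientations.

    Phi    : (m, n) dictionary; each column encodes one (voxel, orientation) atom.
    y      : (m,) diffusion signal to explain.
    groups : list of integer index arrays, one per orientation.
    n_keep : number of orientation groups to retain.
    """
    residual = y.copy()
    kept = []
    for _ in range(n_keep):
        # Score each remaining group by the energy of its correlation with the residual.
        scores = [np.linalg.norm(Phi[:, g].T @ residual) if i not in kept else -np.inf
                  for i, g in enumerate(groups)]
        kept.append(int(np.argmax(scores)))
        # Refit on all atoms of the kept groups and update the residual.
        cols = np.concatenate([groups[i] for i in kept])
        coeffs, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        residual = y - Phi[:, cols] @ coeffs
    return kept
```

Scoring whole orientation groups rather than individual atoms is what distinguishes this kind of screening from the plain OMP sketch above; it mirrors the group-sparse structure the paper exploits, though the exact grouping and score are assumptions here.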