Learning dynamic representations of the functional connectome in neurobiological networks

Authors: Luciano Dyballa, Samuel Lang, Alexandra Haslund-Gourley, Eviatar Yemini, Steven W. Zucker

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Results from our analysis are experimentally validated, confirming that our method is able to robustly predict causal interactions between neurons to generate behavior. ... Although the emphasis here is on the algorithm, our results are confirmed with experiments that silence specific neurons predicted by our method to directly measure their impact on behavior. ... VALIDATION EXPERIMENTS ... COMPARISON WITH OTHER COMMUNITY DETECTION METHODS
Researcher Affiliation | Academia | Luciano Dyballa (1), Samuel Lang (2), Alexandra Haslund-Gourley (1), Eviatar Yemini (2), Steven W. Zucker (1); (1) Dept. of Computer Science, Yale University; (2) Dept. of Neurobiology, UMass Chan Medical School
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Code is available at https://github.com/dyballa/dynamic-connectomes.
Open Datasets | Yes | Our approach was applied to the dataset from Yemini et al. (2021), in which calcium activity from 189 neurons in the head of C. elegans was recorded from a total of 21 individual worms.
Dataset Splits | No | The paper describes the dataset used (Yemini et al., 2021) and benchmark networks generated with the Lancichinetti-Fortunato-Radicchi (LFR) algorithm, but does not specify explicit training, validation, or test splits for these datasets; the core method is unsupervised.
Hardware Specification | No | No specific hardware (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments is mentioned in the paper.
Software Dependencies | No | The paper mentions "the tensortools Python library (Williams, 2024)" and "the graph-tool Python library", but does not provide version numbers for these software components. (A version-logging sketch follows the table.)
Experiment Setup | Yes | Our logic for selecting the number of tensor components, R, was to use as many as possible to minimize reconstruction error, provided the results across multiple random initializations remained stable (i.e., small variance). ... Based on Fig. A3a, the error variability (std. dev. across 15 runs) reaches a minimum when R = 14-15 (shaded area), then increases sharply for R > 15. We therefore selected R = 15. ... The NWSBM method was compared to other popular algorithms for community detection ... The quality of their results was evaluated using normalized mutual information (NMI) ... Our benchmark consisted of 9 different types of network (see Appendix A.2 and Fig. A1 for details). (Illustrative sketches of the rank-selection and benchmark-evaluation steps follow the table.)
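
The Software Dependencies row notes that versions for tensortools and graph-tool are not reported. As a small reproducibility aid, the sketch below shows one way to log installed versions at runtime; the distribution names are assumptions, and graph-tool in particular is usually installed via conda or a system package manager rather than pip, so the lookup may fail.

```python
# Minimal sketch: record the versions of the two libraries named in the paper.
# Distribution names ("tensortools", "graph-tool") are assumptions.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("tensortools", "graph-tool"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: version metadata not found (installed outside pip?)")
```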
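The rank-selection logic quoted in the Experiment Setup row (minimize reconstruction error while keeping the variance across random initializations small) can be illustrated with a short sketch. This is not the authors' code: it runs tensortools' cp_als on a synthetic stand-in tensor, and the tensor shape, rank range, and number of runs are placeholders; the paper's actual tensor construction and fit method may differ.

```python
# Sketch of rank selection by reconstruction-error stability across random
# initializations (synthetic data; not the paper's pipeline).
import numpy as np
import tensortools as tt

rng = np.random.default_rng(0)
X = tt.randn_ktensor((50, 50, 21), rank=10).full()  # synthetic 3-way tensor
X += 0.1 * rng.standard_normal(X.shape)             # add noise

n_runs = 15  # number of random initializations per rank
for R in range(10, 21):
    errors = []
    for seed in range(n_runs):
        fit = tt.cp_als(X, rank=R, random_state=seed, verbose=False)
        # relative reconstruction error of this run
        rel_err = np.linalg.norm(X - fit.factors.full()) / np.linalg.norm(X)
        errors.append(rel_err)
    print(f"R={R:2d}  mean error={np.mean(errors):.4f}  std={np.std(errors):.4f}")

# Following the paper's stated logic, one would pick the largest R for which
# the std. dev. across runs stays small before it starts increasing sharply.
```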
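Likewise, the benchmark comparison described in the same row (scoring community-detection output against planted LFR communities with normalized mutual information) can be sketched as follows. This is an illustration under stated assumptions, not the paper's benchmark: it uses networkx's LFR generator with the parameter values from the networkx documentation and a greedy-modularity baseline in place of the methods actually compared.

```python
# Sketch: generate an LFR benchmark graph with planted communities, run a
# baseline community-detection method, and score it with NMI.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from networkx.generators.community import LFR_benchmark_graph
from sklearn.metrics import normalized_mutual_info_score

# LFR benchmark network (placeholder parameters from the networkx docs)
G = LFR_benchmark_graph(
    n=250, tau1=3, tau2=1.5, mu=0.1,
    average_degree=5, min_community=20, seed=10,
)

# ground truth: each node stores the set of nodes in its planted community
gt_communities = {frozenset(G.nodes[v]["community"]) for v in G}
true_labels = {v: i for i, c in enumerate(gt_communities) for v in c}

# detected communities from a baseline method (greedy modularity maximization)
detected = greedy_modularity_communities(G)
pred_labels = {v: i for i, c in enumerate(detected) for v in c}

nodes = sorted(G.nodes())
nmi = normalized_mutual_info_score(
    [true_labels[v] for v in nodes],
    [pred_labels[v] for v in nodes],
)
print(f"NMI vs. planted partition: {nmi:.3f}")
```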