Attentive Transfer Entropy to Exploit Transient Emergence of Coupling Effect

Authors: Xiaolei Ru, Xinya Zhang, Zijia Liu, Jack Murdoch Moore, Gang Yan

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our results show that, without any prior knowledge of dynamics, ATEn explicitly identifies areas where the strength of coupling-drive is distinctly greater than zero. This innovation substantially improves reconstruction performance for both synthetic and real directed coupling networks using data generated by neuronal models widely used in neuroscience."
Researcher Affiliation | Academia | "School of Physical Science and Engineering, National Key Laboratory of Autonomous Intelligent Unmanned Systems, MOE Frontiers Science Center for Intelligent Autonomous Systems, Tongji University, Shanghai, China {ruxl,xinyazhang,xwzliuzijia,jackmoore,gyan}@tongji.edu.cn"
Pseudocode | Yes | "Implement of our method is presented as Alg. 1 in Appendix C."
Open Source Code | Yes | "See codes in Supplementary Materials for more details."
Open Datasets | Yes | "For real networks, we select six neurological connectivity datasets as presented in Table 1, each from a different species: Cat, Macaque, Mouse, C. elegans, Rat and Drosophila. Details are provided in Appendix E."
Dataset Splits | Yes | "The size of validation and test set is 100 and 400 respectively, with a uniform distribution of positive and negative samples as in the training set, which are randomly selected from all possible ordered pairs within the entire network. ... randomly sampled training/validation/test set (20/100/1000) in C. elegans (left) and Drosophila (right) connectomes." (see the split-sampling sketch after the table)
Hardware Specification | Yes | "We run all experiments in this work on a local machine with two NVIDIA V100 32GB GPUs."
Software Dependencies | No | The paper mentions the "ADAM [30] optimizer" but gives no version number for it or for any other software library, version details that would be needed for exact reproducibility.
Experiment Setup | Yes | "We employ a 4-layer convolutional neural network for model gα and hη, and a 5-layer fully-connected neural network for model fθ and fϕ. We use the ADAM [30] optimizer with initial learning rate of 10⁻³ for the classifier hη and 10⁻⁴ for the others. The learning rate decays exponentially by gamma = 0.999 per epoch. The batch size for stage 1 is 32 and for stage 2 is 10. The number of training epochs is 400." (see the training-configuration sketch below)
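
The split-construction procedure quoted in the Dataset Splits row can be made concrete. Below is a minimal Python sketch, assuming a binary directed adjacency matrix and the split sizes quoted above (the training-set size varies by experiment; the 20/100/1000 figure applies to the connectome experiments). All function and variable names here are hypothetical, not taken from the paper's code.

```python
# Hypothetical sketch of the balanced split sampling described above:
# ordered node pairs are drawn from the whole network, with an equal
# number of positive (linked) and negative (unlinked) pairs per split.
import numpy as np

def sample_balanced_splits(adj, n_train=20, n_val=100, n_test=400, seed=0):
    """adj: (N, N) binary directed adjacency matrix (adj[i, j] = 1 iff i -> j)."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # All ordered pairs (i, j) with i != j, separated by link presence.
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    pos = [p for p in pairs if adj[p]]       # existing directed links
    neg = [p for p in pairs if not adj[p]]   # absent links
    rng.shuffle(pos)
    rng.shuffle(neg)
    splits, used = {}, 0
    for name, size in [("train", n_train), ("val", n_val), ("test", n_test)]:
        half = size // 2  # uniform distribution of positive and negative samples
        splits[name] = pos[used:used + half] + neg[used:used + half]
        used += half
    return splits
```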
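
Similarly, the hyperparameters quoted in the Experiment Setup row map directly onto a standard PyTorch configuration. The sketch below uses placeholder linear modules in place of the paper's four networks (the actual architectures are a 4-layer CNN for gα and hη and a 5-layer fully-connected network for fθ and fϕ); only the learning rates, decay factor, batch sizes, and epoch count come from the quoted text.

```python
# Minimal PyTorch sketch of the quoted training configuration. The
# placeholder modules stand in for g_alpha, h_eta, f_theta and f_phi;
# they are assumptions, not the paper's actual architectures.
import torch

g_alpha = torch.nn.Linear(8, 1)  # placeholder for the 4-layer CNN g_alpha
h_eta   = torch.nn.Linear(8, 1)  # placeholder for the 4-layer CNN h_eta (classifier)
f_theta = torch.nn.Linear(8, 1)  # placeholder for the 5-layer MLP f_theta
f_phi   = torch.nn.Linear(8, 1)  # placeholder for the 5-layer MLP f_phi

# Adam with initial lr 1e-3 for the classifier h_eta, 1e-4 for the others.
optimizer = torch.optim.Adam([
    {"params": h_eta.parameters(),   "lr": 1e-3},
    {"params": g_alpha.parameters(), "lr": 1e-4},
    {"params": f_theta.parameters(), "lr": 1e-4},
    {"params": f_phi.parameters(),   "lr": 1e-4},
])
# Learning rate decays exponentially by gamma = 0.999 per epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)

for epoch in range(400):  # 400 training epochs
    # ... forward/backward over mini-batches (size 32 in stage 1, 10 in stage 2)
    scheduler.step()      # decay once per epoch
```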