MetaRLEC: Meta-Reinforcement Learning for Discovery of Brain Effective Connectivity

Authors: Zuozhen Zhang, Junzhong Ji, Jinduo Liu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct comprehensive experiments on both simulated and real-world data to demonstrate the efficacy of our proposed method." "Systematic experiments conducted on both simulated and real fMRI datasets demonstrate that the proposed method surpasses several state-of-the-art approaches in its performance on small-sample fMRI data."
Researcher Affiliation | Academia | "Zuozhen Zhang, Junzhong Ji, Jinduo Liu*, Beijing Municipal Key Laboratory of Multimedia and Intelligent Software Technology, Beijing Institute of Artificial Intelligence, Faculty of Information Technology, Beijing University of Technology, Beijing, China; zzz3582@emails.bjut.edu.cn, jjz01@bjut.edu.cn, jinduo@bjut.edu.cn"
Pseudocode | Yes | "Algorithm 1: MetaRLEC. Input: Original fMRI time-series data. Output: Brain EC network." (see the training-loop sketch after the table)
Open Source Code | Yes | "The code is available at https://github.com/layzoom/MetaRLEC."
Open Datasets | Yes | "The benchmark simulation datasets we used are supported by Smith et al. (Smith et al. 2011), which are generated by dynamic causal models (DCM)." [1] https://www.fmrib.ox.ac.uk/datasets/netsim/index.html [2] https://github.com/shahpreya/MTlnet
Dataset Splits | Yes | "d_trn is for training and d_val is for validation of online learning." Algorithm 1: "... 7: Sample training batch d_trn from X; ... 14: Sample testing batch d_val from X;"
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper describes model components and mathematical functions but does not provide specific software names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "where λ ≥ 0 is a parameter that controls the sparsity of brain EC networks and A(G) is the sparse penalty function, A(G) = ||G||_1." "where η denotes the learning rate." "The parameters of the algorithms under comparison are selected according to the existing literature and we fine-tune 10 subjects to select the optimal parameters." (see the penalty sketch after the table)
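
The quoted Algorithm 1 fragments (the "Pseudocode" and "Dataset Splits" rows) specify only the input/output and the two batch-sampling steps. The sketch below illustrates that loop structure under stated assumptions: the model object and its `update`, `evaluate`, and `best_graph` methods are hypothetical placeholders, not the authors' MetaRLEC implementation.

```python
# Minimal sketch of the batch-sampling loop quoted from Algorithm 1.
# The model and its update/evaluate/best_graph methods are hypothetical
# placeholders for the meta-RL components, not the authors' code.
import numpy as np

def train_loop_sketch(X, model, n_iters=100, batch_size=16, seed=0):
    """X: fMRI time series, shape (n_samples, n_regions); model: placeholder object."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        # Step 7 (quoted): sample training batch d_trn from X
        d_trn = X[rng.choice(len(X), size=batch_size, replace=False)]
        model.update(d_trn)        # placeholder meta-RL update

        # Step 14 (quoted): sample testing batch d_val from X
        d_val = X[rng.choice(len(X), size=batch_size, replace=False)]
        model.evaluate(d_val)      # placeholder online-learning validation
    return model.best_graph()      # brain EC network (adjacency matrix)
```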
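
The "Experiment Setup" row quotes the sparsity term A(G) = ||G||_1 weighted by λ ≥ 0 and a learning rate η. The snippet below shows only how such an L1 penalty can be computed and added to a score; how MetaRLEC combines it with its reward and where η enters are not given in the quotes, so the combination here is an assumption.

```python
import numpy as np

def sparsity_penalty(G, lam):
    """A(G) = ||G||_1 weighted by lam >= 0, as quoted; G is the EC adjacency matrix."""
    return lam * np.abs(G).sum()

def penalized_score(data_fit, G, lam=0.01):
    # Assumed combination: data-fit term plus the L1 sparsity penalty.
    # The exact objective/reward used by MetaRLEC is not specified in the quotes.
    return data_fit + sparsity_penalty(G, lam)

# Usage: a denser candidate graph incurs a larger penalty.
G_sparse = np.array([[0.0, 0.8], [0.0, 0.0]])
G_dense  = np.array([[0.0, 0.8], [0.7, 0.0]])
print(penalized_score(1.0, G_sparse), penalized_score(1.0, G_dense))
```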