Electrophysiological Brain Source Imaging via Combinatorial Search with Provable Optimality

Authors: Guihong Wan, Meng Jiao, Xinglong Ju, Yu Zhang, Haim Schweitzer, Feng Liu

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on both synthetic data and real epilepsy EEG data demonstrated that the proposed algorithm could faithfully reconstruct the source activation in the brain.
Researcher Affiliation | Academia | (1) Massachusetts General Hospital, Harvard Medical School; (2) School of Systems and Enterprises, Stevens Institute of Technology; (3) Division of Management Information Systems, The University of Oklahoma; (4) Department of Bioengineering, Lehigh University; (5) Department of Computer Science, The University of Texas at Dallas
Pseudocode | Yes | Algorithm 1: The ESI-A* Algorithm.
Open Source Code | Yes | The code is available at https://github.com/ghwanlab/ESI-AStar.
Open Datasets | No | The paper mentions generating synthetic data and collecting real data from Massachusetts General Hospital and the Mayo Clinic, but it does not provide public access links, DOIs, or specific citations indicating that these datasets are publicly available.
Dataset Splits | No | The paper describes generating synthetic data for evaluation and mentions training data for the deep learning baselines, but it does not specify explicit training/validation/test splits for the proposed ESI-A* algorithm.
Hardware Specification | Yes | The experiments were conducted on a Windows PC with an i9 CPU and 64 GB of memory; the deep learning models were trained on an NVIDIA V100 with 32 GB of memory.
Software Dependencies | No | The paper mentions software such as Brainstorm, MNE-Python, and the scikit-learn library, but it does not provide version numbers for any of these components.
Experiment Setup | Yes | The ESI-A* algorithm was optimized on (4) with k=7 when the activated area size is 2 cm and k=17 when it is 4 cm. For the Bi-LSTM, the hidden layer contains 3200 LSTM units connecting the input and output layers. The FNN has an input layer of dimension 128, three hidden layers with 1280, 1280, and 2560 neurons, respectively, and an output layer of dimension 2052.
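The FNN dimensions reported in the Experiment Setup row (128 → 1280 → 1280 → 2560 → 2052) can be sketched as a plain forward pass. This is a minimal illustration only: the activation function (ReLU here) and the weight initialization are assumptions, as the summary above does not state them, and the function names are hypothetical.

```python
import numpy as np

# Layer sizes as reported: input 128, three hidden layers, output 2052.
LAYER_SIZES = [128, 1280, 1280, 2560, 2052]

def init_fnn(sizes, seed=0):
    """Random weights and zero biases per layer (initialization scheme assumed)."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in),
             np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass with ReLU hidden activations (assumed) and a linear output."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:      # apply nonlinearity to hidden layers only
            x = np.maximum(x, 0.0)   # ReLU
    return x

params = init_fnn(LAYER_SIZES)
y = forward(params, np.zeros((1, 128)))
print(y.shape)  # (1, 2052)
```

The output dimension of 2052 would correspond to the number of candidate source locations being estimated, consistent with the ESI reconstruction task described above.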