Dynamic Ensemble Modeling Approach to Nonstationary Neural Decoding in Brain-Computer Interfaces

Authors: Yu Qi, Bin Liu, Yueming Wang, Gang Pan

NeurIPS 2019

Reproducibility assessment (each variable is listed with its result, followed by the LLM response):
Research Type: Experimental
  Experiments are carried out with both simulation data and real neural signal data. The experiments with neural data demonstrate that the DyEnsemble method remarkably outperforms the Kalman filter, and its advantage is more pronounced with noisy signals.
Researcher Affiliation: Academia
  1. College of Computer Science and Technology, Zhejiang University
  2. School of Computer Science, Nanjing University of Posts and Telecommunications
  3. Qiushi Academy for Advanced Studies, Zhejiang University
  4. State Key Lab of CAD&CG, Zhejiang University
Pseudocode: Yes
  The paper provides pseudocode as Algorithm 1, "Candidate Model Generation Strategy" (a hypothetical sketch of such a strategy appears after this list).
Open Source Code: No
  The paper does not provide any statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets: No
  The paper states: "The neural signal dataset includes two rats, for each rat, the data length is about 400 seconds. We use the first 200 seconds for training and the last 100 seconds for test." However, this is a custom dataset generated from the authors' own neural recordings, and the paper does not provide concrete access information (e.g., a link, DOI, or a formal citation to a publicly available dataset).
Dataset Splits: Yes
  The paper states: "All the methods are carefully tuned by validation and the validation set is the last 400 points of training data." (A minimal sketch of the resulting splits appears after this list.)
Hardware Specification: No
  The paper mentions: "Neural signals were recorded by a Cerebus™ system at a sampling rate of 30 kHz" and "16-channel microwire electrode arrays (8×2, diameter = 35 µm) were implanted...". These specify the data-acquisition hardware, but the paper does not state the computational hardware (e.g., GPU models, CPU types, or memory) used to run the decoding models and experiments.
Software Dependencies: No
  The paper mentions methods such as the Kalman filter and long short-term memory (LSTM) but does not name any software with version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions).
Experiment Setup: Yes
  The paper states: "For LSTM, we use a 1-hidden-layer model with 8 hidden neurons. In DyEnsemble, the forgetting factor α is 0.1, model number M is 20, weight perturbation factor p is 0.1, and the particle number is 1000. For DyEnsemble-2 and DyEnsemble-5, the model sizes are 18 and 15, respectively." (These hyperparameters are collected into a configuration sketch below.)
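
The paper's Algorithm 1 is not reproduced in this report. As a rough illustration of what a candidate model generation strategy could look like given the hyperparameters reported under Experiment Setup (model number M = 20, weight perturbation factor p = 0.1), here is a hypothetical Python sketch; the function name generate_candidates, the weight dictionary, and the Gaussian-perturbation rule are assumptions, not the authors' code.

```python
import numpy as np

def generate_candidates(base_weights, M=20, p=0.1, rng=None):
    """Draw M candidate models by perturbing a base model's weights.

    Hypothetical illustration only: candidates are assumed to be made
    by adding Gaussian noise, scaled by the perturbation factor p, to
    a fitted base measurement model. The paper's actual Algorithm 1
    may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    return [
        {name: w + p * rng.standard_normal(w.shape)
         for name, w in base_weights.items()}
        for _ in range(M)
    ]

# Toy usage with a linear measurement model y = x @ W + b
# (16 neural channels mapped to a 2-D movement state):
base = {"W": np.zeros((16, 2)), "b": np.zeros(2)}
pool = generate_candidates(base, M=20, p=0.1)
assert len(pool) == 20
```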
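
For concreteness, here is a minimal sketch of the train/validation/test splits described under Open Datasets and Dataset Splits. The report does not state the bin width, so the 10 bins per second and all array names are assumptions made purely for illustration.

```python
import numpy as np

# Assumed binning: 10 bins per second, so 400 s of recording per rat
# yields 4000 samples over 16 channels (shapes are illustrative only).
BINS_PER_SECOND = 10
data = np.random.randn(400 * BINS_PER_SECOND, 16)

train = data[:200 * BINS_PER_SECOND]   # "first 200 seconds for training"
test = data[-100 * BINS_PER_SECOND:]   # "last 100 seconds for test"

# "the validation set is the last 400 points of training data"
validation = train[-400:]
train_fit = train[:-400]

print(train_fit.shape, validation.shape, test.shape)
```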
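
Finally, the reported hyperparameters collected in one place as a plain Python configuration. Only the numeric values come from the paper; the key names and grouping are editorial.

```python
# Hyperparameters as quoted in the Experiment Setup row; key names and
# grouping are editorial choices, not the authors' code.
EXPERIMENT_CONFIG = {
    "lstm": {
        "hidden_layers": 1,
        "hidden_neurons": 8,
    },
    "dyensemble": {
        "forgetting_factor_alpha": 0.1,  # forgetting factor α
        "model_number_M": 20,            # number of candidate models
        "weight_perturbation_p": 0.1,    # weight perturbation factor
        "particle_number": 1000,         # particles in the filter
    },
    # "Model sizes" for the reduced variants, as quoted from the paper:
    "dyensemble_2": {"model_size": 18},
    "dyensemble_5": {"model_size": 15},
}
```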