Extracting Semantic-Dynamic Features for Long-Term Stable Brain Computer Interface

Authors: Tao Fang, Qian Zheng, Yu Qi, Gang Pan

AAAI 2023 | Conference PDF | Archive PDF

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our recalibration approach achieves state-of-the-art performance on the real neural data of two monkeys in both classification and regression tasks. Our approach is also evaluated on a simulated dataset, which indicates its robustness in dealing with various common causes of neural signal instability.
Researcher Affiliation | Academia | (1) The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China; (2) College of Computer Science and Technology, Zhejiang University, Hangzhou, China; (3) MOE Frontier Science Center for Brain Science and Brain-machine Integration, Zhejiang University, Hangzhou, China
Pseudocode | Yes | Algorithm 1: Getting alignment loss terms of SD-Net per single loop.
Open Source Code | No | The paper does not provide an unambiguous statement or link to open-source code for the described methodology.
Open Datasets | Yes | For experiments we use the neural data published in (Dyer et al. 2017), as well as its corresponding movement trajectories Y^s, Y^t and target point categories L^s, L^t as labels.
Dataset Splits | No | The paper does not specify exact split percentages or sample counts for training, validation, and testing, nor does it reference predefined splits with citations that would support reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used to run the experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., library names and versions) needed to replicate the experiments.
Experiment Setup | Yes | For the other settings we select d = 50, b = 10, T = 60, λ1 = 0.1, and λ2 = 0.1. O^t is trained with a batch size of 10 for 60 epochs at a learning rate of 0.002, optimized with Adam. For L_align, we set λ3 = 0.1, λ4 = 0.1, λ5 = 0.1, and λ6 = 1. We select the pseudo labels with the top 75% highest posterior probabilities and eliminate the others.
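
The Experiment Setup row is the only part of the paper concrete enough to sketch in code. Below is a minimal illustration of how the reported hyperparameters (batch size 10, 60 epochs, learning rate 0.002, Adam) and the top-75% pseudo-label filtering could fit together. The paper does not state its framework, so PyTorch is assumed, and all names (select_confident_pseudo_labels, train_target_decoder, model, features, posteriors) are hypothetical, not the authors' code.

```python
# Sketch only: framework (PyTorch) and all identifiers are assumptions;
# the paper reports the hyperparameters but not the implementation.
import torch
from torch.utils.data import DataLoader, TensorDataset

def select_confident_pseudo_labels(posteriors: torch.Tensor, keep_ratio: float = 0.75):
    """Keep samples whose max posterior probability is in the top 75%,
    as described in the experiment setup; discard the rest."""
    confidence, labels = posteriors.max(dim=1)   # per-sample confidence and pseudo label
    k = int(keep_ratio * posteriors.size(0))     # number of samples to keep
    kept = confidence.topk(k).indices            # indices of the most confident samples
    return kept, labels[kept]

def train_target_decoder(model: torch.nn.Module,
                         features: torch.Tensor,
                         posteriors: torch.Tensor) -> torch.nn.Module:
    # Reported settings: batch size 10, 60 epochs, lr 0.002, Adam optimizer.
    kept, pseudo = select_confident_pseudo_labels(posteriors, keep_ratio=0.75)
    loader = DataLoader(TensorDataset(features[kept], pseudo),
                        batch_size=10, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=0.002)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(60):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```

The filtering step mirrors the stated selection rule: only pseudo labels in the top 75% by posterior probability enter training, which limits the influence of low-confidence predictions during recalibration.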