MiSO: Optimizing brain stimulation to create neural activity states

Authors: Yuki Minai, Joana Soldado-Magraner, Matthew Smith, Byron M Yu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We tested MiSO in closed-loop experiments using electrical microstimulation in the prefrontal cortex of a non-human primate.
Researcher Affiliation | Academia | 1 Neuroscience Institute, Carnegie Mellon University; 2 Machine Learning Department, Carnegie Mellon University; 3 Department of Biomedical Engineering, Carnegie Mellon University; 4 Department of Electrical and Computer Engineering, Carnegie Mellon University; 5 Center for the Neural Basis of Cognition. {yminai,jsoldado,msmith,byronyu}@andrew.cmu.edu
Pseudocode | No | The paper describes the steps of the MiSO framework but does not include formal pseudocode blocks or algorithm listings.
Open Source Code | Yes | Python code for MiSO is available on GitHub at https://github.com/yuumii-san/MiSO.git.
Open Datasets | No | We tested MiSO using electrical microstimulation (uStim) in a macaque monkey... Experimental procedures were approved by the Institutional Animal Care and Use Committee (IACUC) of Carnegie Mellon University.
Dataset Splits | Yes | The merged stimulation-response samples across sessions were split into training, validation, and test sets with a ratio of 80:10:10.
Hardware Specification | Yes | We trained all models on a local computing cluster using 4 NVIDIA GeForce RTX GPUs and 11 GB of RAM.
Software Dependencies | No | The CNN and MLP were implemented in PyTorch and fit using the Adam optimizer with mean squared error loss and a learning rate of 0.001. The GP model was fitted using the GPyTorch library [32].
Experiment Setup | Yes | We set the learning rate α_clip as 0.1, chosen manually by assessing how the latent activity state changed over trials, and ε as 0.05, chosen by running simulations with previously-collected stimulation-response samples.
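The 80:10:10 split reported in the Dataset Splits row can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the shuffle-then-slice strategy, the sample count of 1000, and the fixed seed are all assumptions made for the example.

```python
import numpy as np

def split_samples(n_samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle sample indices, then slice them into train/val/test
    subsets according to the given ratios (assumed strategy)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(ratios[0] * n_samples)
    n_val = int(ratios[1] * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_samples(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 800 100 100
```

Shuffling before slicing matters here because the report says samples were merged across sessions; a contiguous split would otherwise place whole sessions into a single subset.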
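The Experiment Setup row reports ε = 0.05. Assuming ε is the exploration probability of an ε-greedy rule for choosing the next stimulation pattern in the closed loop (a plausible reading of the quoted text, not something the table states explicitly), a minimal sketch:

```python
import numpy as np

def epsilon_greedy_choice(predicted_values, epsilon=0.05, rng=None):
    """With probability epsilon pick a random stimulation pattern index
    (exploration); otherwise pick the predicted best (exploitation).
    The function name and signature are illustrative assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(predicted_values)))
    return int(np.argmax(predicted_values))
```

With ε = 0.05, roughly one in twenty trials would test a random pattern, which is one common way to keep collecting informative stimulation-response samples while mostly exploiting the current model's best prediction.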