Interpretable Nonlinear Dynamic Modeling of Neural Trajectories
Authors: Yuan Zhao, Il Memming Park
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our model can recover qualitative features of the phase portrait such as attractors, slow points, and bifurcations, while also producing reliable long-term future predictions in a variety of dynamical models and in real neural data. We apply the proposed method to a variety of low-dimensional neural models in theoretical neuroscience. Table 1: Model errors |
| Researcher Affiliation | Academia | Yuan Zhao and Il Memming Park Department of Neurobiology and Behavior Department of Applied Mathematics and Statistics Institute for Advanced Computational Science Stony Brook University, NY 11794 {yuan.zhao, memming.park}@stonybrook.edu |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | No | The paper mentions using TensorFlow [14] for implementation but does not provide access to its own source code for the methodology described. |
| Open Datasets | No | To test the model on data obtained from cortex, we use a set of trajectories obtained from the variational Gaussian latent process (vLGP) model [26]. |
| Dataset Splits | No | We use 19 trajectories for training and the last one for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'TensorFlow [14]' but does not provide specific version numbers for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | We estimate the parameters {Wg, WB, τ, c, σ} by minimizing the loss function through gradient descent (Adam [13]) implemented within TensorFlow [14]. We initialize the matrices Wg and WB by truncated standard normal distribution, the centers {ci} by the centroids of the K-means clustering on the training set, and the kernel width σ by the average Euclidean distance between the centers. The model with 10 basis functions learned the dynamics from 90 training trajectories (30 per coherence c = 0, ±0.5). We train the model with 50 basis functions on 100 simulated trajectories... The duration is 200 and the time step is 0.1. |
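The initialization scheme described in the Experiment Setup row (truncated standard normal weights, K-means centroids as RBF centers, kernel width set to the average pairwise distance between centers) can be sketched as below. This is a minimal reconstruction, not the authors' code: the function names and the shape of `WB` are assumptions, and a small Lloyd's-algorithm K-means stands in for whatever clustering implementation the authors used.

```python
import numpy as np

def trunc_std_normal(shape, rng, lo=-2.0, hi=2.0):
    # Truncated standard normal: resample any draw outside [lo, hi]
    w = rng.standard_normal(shape)
    mask = (w < lo) | (w > hi)
    while mask.any():
        w[mask] = rng.standard_normal(int(mask.sum()))
        mask = (w < lo) | (w > hi)
    return w

def kmeans_centers(X, k, rng, iters=50):
    # Minimal Lloyd's algorithm: returns k centroids of the rows of X
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(0)
    return centers

def init_rbf_params(X, n_basis, latent_dim, seed=0):
    """Sketch of the described initialization (hypothetical signature).

    X: training-set states stacked as an (n_points, latent_dim) array.
    Returns initial weight matrices, RBF centers, and kernel width.
    """
    rng = np.random.default_rng(seed)
    Wg = trunc_std_normal((n_basis, latent_dim), rng)      # basis weights
    WB = trunc_std_normal((latent_dim, latent_dim), rng)   # shape assumed
    centers = kmeans_centers(X, n_basis, rng)
    # Kernel width: average pairwise Euclidean distance between centers
    d = np.sqrt(((centers[:, None] - centers[None]) ** 2).sum(-1))
    sigma = d[np.triu_indices(n_basis, k=1)].mean()
    return Wg, WB, centers, sigma
```

In this sketch the parameters would then be refined jointly by gradient descent (the paper uses Adam in TensorFlow); the initialization only needs to place the basis functions where the training trajectories actually live.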