Model-Based Inference of Synaptic Plasticity Rules
Authors: Yash Mehta, Danil Tyulmankov, Adithya Rajagopalan, Glenn Turner, James Fitzgerald, Jan Funke
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our approach through simulations, successfully recovering established rules such as Oja's, as well as more intricate plasticity rules with reward-modulated terms. We assess the robustness of our technique to noise and apply it to behavioral data from Drosophila in a probabilistic reward-learning experiment. (An illustrative sketch of Oja's rule is given below the table.) |
| Researcher Affiliation | Academia | 1Janelia Research Campus, Howard Hughes Medical Institute 2Department of Cognitive Science, Johns Hopkins University 3Center for Theoretical Neuroscience, Columbia University 4Viterbi School of Engineering, University of Southern California 5Center for Neural Science, New York University 6Department of Neurobiology, Northwestern University |
| Pseudocode | No | The paper describes the methodology and equations in text, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/yashsmehta/MetaLearnPlasticity |
| Open Datasets | No | The paper uses synthetic neural activity and behavioral data from fruit flies collected by the authors ('For model fitting, we use data from 18 flies'), but does not provide concrete access information or citations to a publicly available dataset. |
| Dataset Splits | Yes | In our simulations, we use 18 trajectories for training (matching the size of our previous experimental data) and 7 for evaluation for each seed. |
| Hardware Specification | Yes | In terms of time, fitting Taylor coefficients on a network with approximately 10^6 synapses and 1000-time-step trajectories takes 2 hours on an NVIDIA H100 GPU. |
| Software Dependencies | No | The paper states, 'Our framework is implemented in JAX (Bradbury et al., 2018)', but does not provide specific version numbers for JAX or other key software libraries. |
| Experiment Setup | Yes | To ensure numerical stability and prevent exploding gradients, gradient clipping is applied with a threshold of 0.2. The coefficients θ_{αβγ} of the Taylor series expansion representing the plasticity rule are learned, initialized independently and identically distributed (i.i.d.) from a normal distribution with a mean of 0 and a variance of 10^-4. (...) The Adam optimizer is used to train the weights of the plasticity model, with default parameters. (A minimal optax sketch of this setup appears after the table.) |
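
The validation quote above mentions recovering Oja's rule from simulated data. For reference, the sketch below shows the standard form of Oja's rule for a single postsynaptic unit; the function and variable names are ours for illustration and are not taken from the paper or its repository.

```python
# Illustrative sketch only: Oja's rule, one of the ground-truth plasticity
# rules the paper reports recovering in simulation.
import jax
import jax.numpy as jnp

def oja_update(w, x, eta=0.01):
    """One Oja's-rule step: dw = eta * y * (x - y * w), with y = w . x."""
    y = jnp.dot(w, x)                 # postsynaptic activity
    return w + eta * y * (x - y * w)  # Hebbian growth with weight decay

# Example: repeated presentations drive w toward the input direction while
# keeping its norm bounded (hypothetical input, not data from the paper).
key = jax.random.PRNGKey(0)
w = 0.1 * jax.random.normal(key, (5,))
x = jnp.ones(5)
for _ in range(100):
    w = oja_update(w, x)
```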
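The experiment-setup row reports gradient clipping at 0.2, Taylor coefficients θ_{αβγ} initialized i.i.d. from N(0, 10^-4), and Adam with default parameters, with the framework implemented in JAX. The sketch below shows what such a training setup could look like with optax; the coefficient tensor shape, learning rate, clipping variant (global norm), and loss function are placeholders we assume for illustration, not details confirmed by the paper.

```python
# Minimal sketch of the reported training setup (not the authors' code).
import jax
import jax.numpy as jnp
import optax

key = jax.random.PRNGKey(0)
# Hypothetical coefficient tensor theta_{alpha,beta,gamma}; the (3, 3, 3)
# shape is a placeholder, not the truncation order used in the paper.
theta = 1e-2 * jax.random.normal(key, (3, 3, 3))  # std 1e-2 -> variance 1e-4

optimizer = optax.chain(
    optax.clip_by_global_norm(0.2),  # clipping threshold 0.2 (variant assumed)
    optax.adam(learning_rate=1e-3),  # lr assumed; b1/b2/eps left at defaults
)
opt_state = optimizer.init(theta)

def loss_fn(theta):
    # Placeholder loss; the paper fits simulated or fly behavioral trajectories.
    return jnp.sum(theta ** 2)

# One optimization step: gradients are clipped, then applied via Adam.
grads = jax.grad(loss_fn)(theta)
updates, opt_state = optimizer.update(grads, opt_state, theta)
theta = optax.apply_updates(theta, updates)
```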