Multilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses
Authors: Eleanor Batty, Josh Merel, Nora Brackbill, Alexander Heitman, Alexander Sher, Alan Litke, E.J. Chichilnisky, Liam Paninski
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here, we show that a multitask recurrent neural network (RNN) framework provides the flexibility necessary to model complex computations of neurons that cannot be captured by previous methods. Specifically, multilayer recurrent neural networks that share features across neurons outperform generalized linear models (GLMs) in predicting the spiking responses of parasol ganglion cells in the primate retina to natural images. The networks achieve good predictive performance given surprisingly small amounts of experimental training data. |
| Researcher Affiliation | Academia | Eleanor Batty, Josh Merel (Doctoral Program in Neurobiology & Behavior, Columbia University; erb2180@columbia.edu, jsmerel@gmail.com); Nora Brackbill (Department of Physics, Stanford University; nbrack@stanford.edu); Alexander Heitman (Neurosciences Graduate Program, University of California, San Diego; alexkenheitman@gmail.com); Alexander Sher & Alan Litke (Santa Cruz Institute for Particle Physics, University of California, Santa Cruz; sashake3@ucsc.edu, Alan.Litke@cern.ch); E.J. Chichilnisky (Department of Neurosurgery and Hansen Experimental Physics Laboratory, Stanford University; ej@stanford.edu); Liam Paninski (Departments of Statistics and Neuroscience, Columbia University; liam@stat.columbia.edu) |
| Pseudocode | No | The paper describes model architectures using mathematical equations and textual explanations, but it does not contain a dedicated pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide a direct link to the source code for the implemented models, nor does it explicitly state that the code is publicly available. It mentions implementation in Theano but provides no specific repository for their code. |
| Open Datasets | Yes | The naturalistic movie stimulus consisted of images from the Van Hateren database shown for one second each, with spatial jitter based on eye movements during fixation by awake macaque monkeys (Z.M. Hafed and R.J. Krauzlis, personal communication) (van Hateren & van der Schaaf, 1998). An example stimulus can be found at https://youtu.be/sG_18Uz_6OE. |
| Dataset Splits | Yes | We used the same split of training and validation data for both experiments: 104 thirty-second movies as training data and 10 thirty-second movies as a held-out validation set. |
| Hardware Specification | No | All models were implemented in Theano and trained on a combination of CPUs and GPUs (Theano Development Team, 2016). This statement is too general: it does not specify the particular CPU/GPU models or quantities used. |
| Software Dependencies | No | All models were implemented in Theano and trained on a combination of CPUs and GPUs (Theano Development Team, 2016). Training was performed using the Adam optimizer on the mean squared error (MSE) between predicted firing rate and true spikes (Kingma & Ba, 2014). While Theano and the Adam optimizer are mentioned, no version numbers are provided for any software dependency. |
| Experiment Setup | Yes | Training was performed using the Adam optimizer on the mean squared error (MSE) between predicted firing rate and true spikes (Kingma & Ba, 2014). All recurrent dynamics and temporal filters operated on time bins of 8.33 ms (the frame rate of the movie). ... We train for a maximum of 150 epochs, where we define one epoch as one pass through all the training data. ... The linear-nonlinear model (LN) consists of a spatiotemporal filtering of the 31x31x30 movie patch... We used a rank 1 approximation... resulting in a vectorized 31x31 spatial filter (w_s) and a 30-bin temporal filter (w_t) which spans 250 ms... The post-spike history filter consists of a weighted sum of a basis of 20 raised cosines spanning approximately 100 ms... The nonlinearity is the logistic sigmoid: f = L/(1 + exp(-x)). ... Two-layer RNN, 50 units... (Hedged code sketches of the LN model and the shared two-layer RNN follow this table.) |
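
To make the quoted LN/GLM setup concrete, here is a minimal NumPy sketch of the rank-1 spatiotemporal filter, a raised-cosine post-spike basis, and the scaled logistic nonlinearity. The function names, filter values, and the exact cosine parameterization are illustrative assumptions; the paper specifies only the dimensions, the temporal spans, and the form of the sigmoid.

```python
import numpy as np

def scaled_sigmoid(x, L=1.0):
    # The paper's output nonlinearity: f(x) = L / (1 + exp(-x)).
    return L / (1.0 + np.exp(-x))

def raised_cosine_basis(n_basis=20, n_bins=12):
    # 20 raised cosines tiling ~100 ms (12 bins of 8.33 ms) for the
    # post-spike history filter. Linear spacing is an assumption here;
    # the paper states only the basis size and temporal span.
    centers = np.linspace(0, n_bins - 1, n_basis)
    width = 2.0 * (n_bins - 1) / (n_basis - 1)
    t = np.arange(n_bins)[:, None]
    arg = np.clip((t - centers[None, :]) * np.pi / width, -np.pi, np.pi)
    return 0.5 * (1.0 + np.cos(arg))  # shape (n_bins, n_basis)

def ln_rate(patch, w_s, w_t, bias, L=1.0):
    # Rank-1 LN model: patch has shape (31, 31, 30) = (x, y, time bins),
    # where the 30 bins of 8.33 ms span 250 ms. The rank-1 approximation
    # filters space and time separably.
    spatial_proj = np.tensordot(patch, w_s.reshape(31, 31), axes=([0, 1], [0, 1]))
    return scaled_sigmoid(spatial_proj @ w_t + bias, L)

# Illustrative usage with random (untrained) filters.
rng = np.random.default_rng(0)
w_s = 0.01 * rng.normal(size=31 * 31)   # vectorized 31x31 spatial filter
w_t = 0.1 * rng.normal(size=30)         # 30-bin temporal filter (250 ms)
patch = rng.normal(size=(31, 31, 30))
print(f"LN predicted rate for one bin: {ln_rate(patch, w_s, w_t, bias=-2.0):.4f}")
```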
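
And a hedged sketch of the shared multitask recurrent model: two recurrent layers of 50 hidden units whose features are shared across all cells, a per-neuron linear readout, and training with Adam on the MSE between predicted rates and binned spikes for at most 150 epochs, per the quoted setup. This uses PyTorch rather than the authors' Theano; the class name, learning rate, vanilla-RNN cell, sigmoid output, and toy data are all assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SharedMultitaskRNN(nn.Module):
    # Two recurrent layers of 50 shared units; only the linear readout
    # is neuron-specific, so features are shared across cells.
    def __init__(self, n_pixels=31 * 31, n_hidden=50, n_neurons=4):
        super().__init__()
        self.rnn = nn.RNN(n_pixels, n_hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_neurons)

    def forward(self, movie):
        # movie: (batch, time_bins, pixels); one bin = 8.33 ms (movie frame rate).
        h, _ = self.rnn(movie)
        # Sigmoid output mirrors the LN model's nonlinearity; the quoted
        # text does not pin down the RNN's output nonlinearity.
        return torch.sigmoid(self.readout(h))

model = SharedMultitaskRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumption

# Toy stand-ins for the real stimulus movies and binned spike trains.
movie = torch.randn(8, 120, 31 * 31)                 # 8 clips, 120 bins each
spikes = torch.poisson(0.1 * torch.ones(8, 120, 4))  # fake binned spike counts

for epoch in range(150):  # the paper trains for a maximum of 150 epochs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(movie), spikes)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.4f}")
```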